Mirror of https://github.com/alibaba/DataX.git, synced 2025-05-02 06:50:35 +08:00

Merge branch 'master' into doriswriter-1

This commit is contained in: commit 006d24fccf

NOTICE (new file, 39 lines)
@ -0,0 +1,39 @@
========================================================
DataX, the open source edition of Alibaba Cloud DataWorks Data Integration, is an offline data synchronization tool/platform widely used within Alibaba Group and other companies. DataX implements efficient data synchronization between heterogeneous data sources including MySQL, Oracle, OceanBase, SQL Server, PostgreSQL, HDFS, Hive, ADS, HBase, TableStore (OTS), MaxCompute (ODPS), Hologres, DRDS, and more.

Copyright 1999-2022 Alibaba Group Holding Ltd.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

===================================================================
File-level references, grouped by license.
This product contains various third-party components under other open source licenses.
This section summarizes those components and their licenses.

GNU Lesser General Public License
--------------------------------------
opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/CliQuery.java
opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/Connection4TSDB.java
opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/DataPoint4TSDB.java
opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/DumpSeries.java
opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/OpenTSDBConnection.java
opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/OpenTSDBDump.java
opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/opentsdbreader/Constant.java
opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/opentsdbreader/Key.java
opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/opentsdbreader/OpenTSDBReader.java
opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/opentsdbreader/OpenTSDBReaderErrorCode.java
opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/util/HttpUtils.java
opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/util/TSDBUtils.java
opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/util/TimeUtils.java
===================================================================
README.md (28 lines changed)
@ -25,7 +25,8 @@ DataX itself, as a data synchronization framework, abstracts synchronization between different data sources into reading from a source
# Quick Start

##### Download [DataX download](http://datax-opensource.oss-cn-hangzhou.aliyuncs.com/datax.tar.gz)
##### Download [DataX download](https://datax-opensource.oss-cn-hangzhou.aliyuncs.com/20220530/datax.tar.gz)

##### Please see: [Quick Start](https://github.com/alibaba/DataX/blob/master/userGuid.md)

@ -62,6 +63,7 @@ DataX already has a fairly comprehensive plugin ecosystem; mainstream RDBMS databases, N
| | Elasticsearch | | √ |[Write](https://github.com/alibaba/DataX/blob/master/elasticsearchwriter/doc/elasticsearchwriter.md)|
| Time-series databases | OpenTSDB | √ | |[Read](https://github.com/alibaba/DataX/blob/master/opentsdbreader/doc/opentsdbreader.md)|
| | TSDB | √ | √ |[Read](https://github.com/alibaba/DataX/blob/master/tsdbreader/doc/tsdbreader.md), [Write](https://github.com/alibaba/DataX/blob/master/tsdbwriter/doc/tsdbhttpwriter.md)|
| | TDengine | √ | √ |[Read](https://github.com/alibaba/DataX/blob/master/tdenginereader/doc/tdenginereader-CN.md), [Write](https://github.com/alibaba/DataX/blob/master/tdenginewriter/doc/tdenginewriter-CN.md)|

# Alibaba Cloud DataWorks Data Integration

@ -89,6 +91,13 @@ DataX already has a fairly comprehensive plugin ecosystem; mainstream RDBMS databases, N

Please see: [DataX Plugin Development Guide](https://github.com/alibaba/DataX/blob/master/dataxPluginDev.md)

# Important Release Notes

DataX plans to ship monthly iterative updates going forward, and pull requests from interested contributors are welcome; the contents of each monthly update are summarized below.

- [datax_v202205](https://github.com/alibaba/DataX/releases/tag/datax_v202205)
  - Channel capability updates (MaxCompute, Hologres, OSS, TDengine, etc.), security vulnerability fixes, and general packaging updates

# Project Members

@ -136,23 +145,10 @@ This software is free to use under the Apache License [Apache license](https://g
8. Practical project and product experience with high concurrency, high availability, high performance, and big data processing is preferred;
9. Experience with big data products, cloud products, or middleware technical solutions is preferred.
````
DingTalk user groups:

- DataX open source user group
- <img src="https://github.com/alibaba/DataX/blob/master/images/DataX%E5%BC%80%E6%BA%90%E7%94%A8%E6%88%B7%E4%BA%A4%E6%B5%81%E7%BE%A4.jpg" width="20%" height="20%">
User support:

- DataX open source user group 2
- <img src="https://github.com/alibaba/DataX/blob/master/images/DataX%E5%BC%80%E6%BA%90%E7%94%A8%E6%88%B7%E4%BA%A4%E6%B5%81%E7%BE%A42.jpg" width="20%" height="20%">
The DingTalk groups are currently affected by some moderation policies, so please submit questions as GitHub Issues first; the DataX developers and the community answer Issues regularly, and the resulting knowledge base will also help later users.

- DataX open source user group 3
- <img src="https://github.com/alibaba/DataX/blob/master/images/DataX%E5%BC%80%E6%BA%90%E7%94%A8%E6%88%B7%E4%BA%A4%E6%B5%81%E7%BE%A43.jpg" width="20%" height="20%">

- DataX open source user group 4
- <img src="https://github.com/alibaba/DataX/blob/master/images/DataX%E5%BC%80%E6%BA%90%E7%94%A8%E6%88%B7%E4%BA%A4%E6%B5%81%E7%BE%A44.jpg" width="20%" height="20%">

- DataX open source user group 5
- <img src="https://github.com/alibaba/DataX/blob/master/images/DataX%E5%BC%80%E6%BA%90%E7%94%A8%E6%88%B7%E4%BA%A4%E6%B5%81%E7%BE%A45.jpg" width="20%" height="20%">

- DataX open source user group 6
- <img src="https://user-images.githubusercontent.com/1905000/124073771-139cbd00-da75-11eb-9a3f-598cba145a76.png" width="20%" height="20%">
@ -65,9 +65,9 @@ The COPY command writes data into the ADB PG database.
        "writer": {
          "name": "adbpgwriter",
          "parameter": {
            "username": "username",
            "password": "password",
            "host": "host",
            "username": "",
            "password": "",
            "host": "127.0.0.1",
            "port": "1234",
            "database": "database",
            "schema": "schema",
@ -61,6 +61,14 @@
    </dependencies>

    <build>
        <resources>
            <resource>
                <directory>src/main/java</directory>
                <includes>
                    <include>**/*.properties</include>
                </includes>
            </resource>
        </resources>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
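The `<resources>` block above is what lets `.properties` files that sit next to the Java sources (such as the new `LocalStrings` message bundles later in this commit) end up on the runtime classpath. A minimal sketch of how to verify that such a file is visible at runtime; the resource path used here is illustrative, not taken from this diff:

```java
import java.io.InputStream;
import java.util.Properties;

public class ResourceCheck {
    public static void main(String[] args) throws Exception {
        Properties messages = new Properties();
        // Illustrative path: the real bundles live next to the classes that use them.
        try (InputStream in = ResourceCheck.class.getClassLoader()
                .getResourceAsStream("com/alibaba/datax/common/util/LocalStrings.properties")) {
            if (in != null) {
                messages.load(in);
            }
        }
        System.out.println("keys loaded: " + messages.size());
    }
}
```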
@ -93,6 +93,12 @@ public class BoolColumn extends Column {
                CommonErrorCode.CONVERT_NOT_SUPPORT, "Bool类型不能转为Date .");
    }

    @Override
    public Date asDate(String dateFormat) {
        throw DataXException.asDataXException(
                CommonErrorCode.CONVERT_NOT_SUPPORT, "Bool类型不能转为Date .");
    }

    @Override
    public byte[] asBytes() {
        throw DataXException.asDataXException(
@ -76,6 +76,12 @@ public class BytesColumn extends Column {
                CommonErrorCode.CONVERT_NOT_SUPPORT, "Bytes类型不能转为Date .");
    }

    @Override
    public Date asDate(String dateFormat) {
        throw DataXException.asDataXException(
                CommonErrorCode.CONVERT_NOT_SUPPORT, "Bytes类型不能转为Date .");
    }

    @Override
    public Boolean asBoolean() {
        throw DataXException.asDataXException(
@ -56,6 +56,8 @@ public abstract class Column {

    public abstract Date asDate();

    public abstract Date asDate(String dateFormat);

    public abstract byte[] asBytes();

    public abstract Boolean asBoolean();
@ -23,6 +23,11 @@ public final class ColumnCast {
        return StringCast.asDate(column);
    }

    public static Date string2Date(final StringColumn column, String dateFormat)
            throws ParseException {
        return StringCast.asDate(column, dateFormat);
    }

    public static byte[] string2Bytes(final StringColumn column)
            throws UnsupportedEncodingException {
        return StringCast.asBytes(column);
@ -116,6 +121,16 @@ class StringCast {
        throw e;
    }

    static Date asDate(final StringColumn column, String dateFormat) throws ParseException {
        ParseException e;
        try {
            return FastDateFormat.getInstance(dateFormat, StringCast.timeZoner).parse(column.asString());
        } catch (ParseException ignored) {
            e = ignored;
        }
        throw e;
    }

    static byte[] asBytes(final StringColumn column)
            throws UnsupportedEncodingException {
        if (null == column.asString()) {
@ -90,6 +90,11 @@ public class DateColumn extends Column {
        return new Date((Long)this.getRawData());
    }

    @Override
    public Date asDate(String dateFormat) {
        return asDate();
    }

    @Override
    public byte[] asBytes() {
        throw DataXException.asDataXException(
@ -133,6 +133,12 @@ public class DoubleColumn extends Column {
                CommonErrorCode.CONVERT_NOT_SUPPORT, "Double类型无法转为Date类型 .");
    }

    @Override
    public Date asDate(String dateFormat) {
        throw DataXException.asDataXException(
                CommonErrorCode.CONVERT_NOT_SUPPORT, "Double类型无法转为Date类型 .");
    }

    @Override
    public byte[] asBytes() {
        throw DataXException.asDataXException(
@ -126,6 +126,11 @@ public class LongColumn extends Column {
        return new Date(this.asLong());
    }

    @Override
    public Date asDate(String dateFormat) {
        return this.asDate();
    }

    @Override
    public byte[] asBytes() {
        throw DataXException.asDataXException(
@ -1,5 +1,7 @@
package com.alibaba.datax.common.element;

import java.util.Map;

/**
 * Created by jingxing on 14-8-24.
 */
@ -20,4 +22,8 @@ public interface Record {

    public int getMemorySize();

    public void setMeta(Map<String, String> meta);

    public Map<String, String> getMeta();

}
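The interface now carries an optional string-to-string meta map alongside the column data. A minimal sketch of how a caller could attach and read that metadata; the stub class and the key/value used here are hypothetical, the real `Record` implementations live in datax-core:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stub that shows only the new meta accessors.
class MetaOnlyRecord {
    private Map<String, String> meta;

    public void setMeta(Map<String, String> meta) {
        this.meta = meta;
    }

    public Map<String, String> getMeta() {
        return meta;
    }
}

class MetaDemo {
    public static void main(String[] args) {
        MetaOnlyRecord record = new MetaOnlyRecord();
        Map<String, String> meta = new HashMap<>();
        meta.put("sourceTable", "t_order"); // illustrative key/value
        record.setMeta(meta);
        System.out.println(record.getMeta());
    }
}
```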
@ -150,6 +150,16 @@ public class StringColumn extends Column {
        }
    }

    @Override
    public Date asDate(String dateFormat) {
        try {
            return ColumnCast.string2Date(this, dateFormat);
        } catch (Exception e) {
            throw DataXException.asDataXException(CommonErrorCode.CONVERT_NOT_SUPPORT,
                    String.format("String[\"%s\"]不能转为Date .", this.asString()));
        }
    }

    @Override
    public byte[] asBytes() {
        try {
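Taken together, these hunks add an `asDate(String dateFormat)` overload to the whole `Column` hierarchy: string columns parse with the supplied pattern through `ColumnCast`/`FastDateFormat`, date and long columns ignore the pattern, and the remaining types keep throwing `CONVERT_NOT_SUPPORT`. A minimal sketch of the new call path, assuming the standard `StringColumn(String)` constructor; in a running job, `ColumnCast.bind(configuration)` will already have initialized the default time zone:

```java
import java.util.Date;

import com.alibaba.datax.common.element.StringColumn;

public class AsDateDemo {
    public static void main(String[] args) {
        StringColumn column = new StringColumn("2022/05/30 12:34:56");
        // The explicit pattern is handed through ColumnCast.string2Date(...)
        // down to FastDateFormat.getInstance(dateFormat, timeZone).parse(...).
        Date parsed = column.asDate("yyyy/MM/dd HH:mm:ss");
        System.out.println(parsed);
    }
}
```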
@ -16,6 +16,10 @@ public class DataXException extends RuntimeException {
        this.errorCode = errorCode;
    }

    public DataXException(String errorMessage) {
        super(errorMessage);
    }

    private DataXException(ErrorCode errorCode, String errorMessage, Throwable cause) {
        super(errorCode.toString() + " - " + getMessage(errorMessage) + " - " + getMessage(cause), cause);

@ -26,6 +30,10 @@ public class DataXException extends RuntimeException {
        return new DataXException(errorCode, message);
    }

    public static DataXException asDataXException(String message) {
        return new DataXException(message);
    }

    public static DataXException asDataXException(ErrorCode errorCode, String message, Throwable cause) {
        if (cause instanceof DataXException) {
            return (DataXException) cause;
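The new message-only constructor and factory let callers raise a `DataXException` without choosing an `ErrorCode` first. A short sketch of the added call style; the message text is illustrative:

```java
import com.alibaba.datax.common.exception.DataXException;

public class ExceptionDemo {
    public static void main(String[] args) {
        try {
            // New style added in this change: no ErrorCode required.
            throw DataXException.asDataXException("accessKey is blank, please check the job configuration");
        } catch (DataXException e) {
            System.out.println(e.getMessage());
        }
    }
}
```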
@ -3,6 +3,8 @@ package com.alibaba.datax.common.plugin;
import com.alibaba.datax.common.base.BaseObject;
import com.alibaba.datax.common.util.Configuration;

import java.util.List;

public abstract class AbstractPlugin extends BaseObject implements Pluginable {
    // the job's configuration
    private Configuration pluginJobConf;
@ -15,6 +17,8 @@ public abstract class AbstractPlugin extends BaseObject implements Pluginable {

    private String peerPluginName;

    private List<Configuration> readerPluginSplitConf;

    @Override
    public String getPluginName() {
        assert null != this.pluginConf;
@ -84,4 +88,12 @@ public abstract class AbstractPlugin extends BaseObject implements Pluginable {
    public void postHandler(Configuration jobConfiguration){

    }

    public List<Configuration> getReaderPluginSplitConf(){
        return this.readerPluginSplitConf;
    }

    public void setReaderPluginSplitConf(List<Configuration> readerPluginSplitConf){
        this.readerPluginSplitConf = readerPluginSplitConf;
    }
}
@ -0,0 +1,37 @@
package com.alibaba.datax.common.util;

import java.util.Arrays;
import java.util.List;
import java.util.Set;

import org.apache.commons.lang3.StringUtils;

public class ConfigurationUtil {
    private static final List<String> SENSITIVE_KEYS = Arrays.asList("password", "accessKey", "securityToken",
            "AccessKeyId", "AccessKeySecert", "AccessKeySecret", "clientPassword");

    public static Configuration filterSensitive(Configuration origin) {
        // The configuration metric of a shell task may be null.
        if (origin == null) {
            return origin;
        }
        // Make sure the input object is not modified.
        Configuration configuration = origin.clone();
        Set<String> keys = configuration.getKeys();
        for (final String key : keys) {
            boolean isSensitive = false;
            for (String sensitiveKey : SENSITIVE_KEYS) {
                if (StringUtils.endsWithIgnoreCase(key, sensitiveKey)) {
                    isSensitive = true;
                    break;
                }
            }

            if (isSensitive && configuration.get(key) instanceof String) {
                configuration.set(key, configuration.getString(key).replaceAll(".", "*"));
            }

        }
        return configuration;
    }
}
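A small sketch of what `filterSensitive` does to a job configuration before it is logged: any key ending with one of the sensitive suffixes has its string value replaced by asterisks, and the original `Configuration` is left untouched. The JSON used here is illustrative:

```java
import com.alibaba.datax.common.util.Configuration;
import com.alibaba.datax.common.util.ConfigurationUtil;

public class FilterSensitiveDemo {
    public static void main(String[] args) {
        Configuration conf = Configuration.from(
                "{\"username\":\"root\",\"password\":\"secret123\",\"accessKey\":\"AK-abc\"}");

        Configuration masked = ConfigurationUtil.filterSensitive(conf);

        // password/accessKey are masked character by character, username is unchanged.
        System.out.println(masked.toJSON());
        System.out.println(conf.getString("password")); // still "secret123"
    }
}
```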
@ -1,5 +1,5 @@
/**
 * (C) 2010-2014 Alibaba Group Holding Limited.
 * (C) 2010-2022 Alibaba Group Holding Limited.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
@ -14,342 +14,216 @@
 * limitations under the License.
 */

package com.alibaba.datax.plugin.reader.odpsreader.util;
package com.alibaba.datax.common.util;

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.DESKeySpec;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.security.SecureRandom;

/**
 * DES encryption/decryption, interoperable with Delphi (strings must be UTF-8 encoded).
 * Moved into common so that later code can reuse it.
 *
 * @author wym
 */
public class DESCipher {

    private static Logger LOGGER = LoggerFactory.getLogger(DESCipher.class);

    /**
     * Key.
     */
    public static final String KEY = "DESDES";
    public static final String KEY = "";

    private final static String DES = "DES";

    /**
     * Encrypt.
     * @param src plaintext (bytes)
     * @param key key, its length must be a multiple of 8
     * @return ciphertext (bytes)
     * @throws Exception
     */
    public static byte[] encrypt(byte[] src, byte[] key) throws Exception {
        // DES requires a trusted source of randomness
        SecureRandom sr = new SecureRandom();
        // Build a DESKeySpec from the raw key material
        DESKeySpec dks = new DESKeySpec(key);
        // Create a key factory and use it to turn the DESKeySpec into a SecretKey
        SecretKeyFactory keyFactory = SecretKeyFactory.getInstance(DES);
        SecretKey securekey = keyFactory.generateSecret(dks);
        // The Cipher object performs the actual encryption
        Cipher cipher = Cipher.getInstance(DES);
        // Initialize the Cipher with the key
        cipher.init(Cipher.ENCRYPT_MODE, securekey, sr);
        // Encrypt the data
        return cipher.doFinal(src);
    }

    /**
     * Decrypt.
     * @param src ciphertext (bytes)
     * @param key key, its length must be a multiple of 8
     * @return plaintext (bytes)
     * @throws Exception
     */
    public static byte[] decrypt(byte[] src, byte[] key) throws Exception {
        // DES requires a trusted source of randomness
        SecureRandom sr = new SecureRandom();
        // Build a DESKeySpec from the raw key material
        DESKeySpec dks = new DESKeySpec(key);
        // Create a key factory and use it to turn the DESKeySpec into a SecretKey
        SecretKeyFactory keyFactory = SecretKeyFactory.getInstance(DES);
        SecretKey securekey = keyFactory.generateSecret(dks);
        // The Cipher object performs the actual decryption
        Cipher cipher = Cipher.getInstance(DES);
        // Initialize the Cipher with the key
        cipher.init(Cipher.DECRYPT_MODE, securekey, sr);
        // Decrypt the data
        return cipher.doFinal(src);
    }

    /**
     * Encrypt.
     * @param src plaintext (bytes)
     * @return ciphertext (bytes)
     * @throws Exception
     */
    public static byte[] encrypt(byte[] src) throws Exception {
        return encrypt(src, KEY.getBytes());
    }

    /**
     * Decrypt.
     * @param src ciphertext (bytes)
     * @return plaintext (bytes)
     * @throws Exception
     */
    public static byte[] decrypt(byte[] src) throws Exception {
        return decrypt(src, KEY.getBytes());
    }

    /**
     * Encrypt.
     * @param src plaintext (string)
     * @return ciphertext (hex string)
     */
    public final static String encrypt(String src) {
        try {
            return byte2hex(encrypt(src.getBytes(), KEY.getBytes()));
        } catch (Exception e) {
            LOGGER.warn(e.getMessage(), e);
        }
        return null;
    }

    /**
     * Encrypt.
     * @param src plaintext (string)
     * @param encryptKey the key used for encryption
     * @return ciphertext (hex string)
     */
    public final static String encrypt(String src, String encryptKey) {
        try {
            return byte2hex(encrypt(src.getBytes(), encryptKey.getBytes()));
        } catch (Exception e) {
            LOGGER.warn(e.getMessage(), e);
        }
        return null;
    }

    /**
     * Decrypt.
     * @param src ciphertext (string)
     * @return plaintext (string)
     */
    public final static String decrypt(String src) {
        try {
            return new String(decrypt(hex2byte(src.getBytes()), KEY.getBytes()));
        } catch (Exception e) {
            LOGGER.warn(e.getMessage(), e);
        }
        return null;
    }

    /**
     * Decrypt.
     * @param src ciphertext (string)
     * @param decryptKey the key used for decryption
     * @return plaintext (string)
     */
    public final static String decrypt(String src, String decryptKey) {
        try {
            return new String(decrypt(hex2byte(src.getBytes()), decryptKey.getBytes()));
        } catch (Exception e) {
            LOGGER.warn(e.getMessage(), e);
        }
        return null;
    }

    /**
     * Encrypt.
     * @param src plaintext (bytes)
     * @return ciphertext (hex string)
     * @throws Exception
     */
    public static String encryptToString(byte[] src) throws Exception {
        return encrypt(new String(src));
    }

    /**
     * Decrypt.
     * @param src ciphertext (bytes)
     * @return plaintext (string)
     * @throws Exception
     */
    public static String decryptToString(byte[] src) throws Exception {
        return decrypt(new String(src));
    }

    public static String byte2hex(byte[] b) {
        String hs = "";
        String stmp = "";
        for (int n = 0; n < b.length; n++) {
            stmp = (Integer.toHexString(b[n] & 0XFF));
            if (stmp.length() == 1)
                hs = hs + "0" + stmp;
            else
                hs = hs + stmp;
        }
        return hs.toUpperCase();
    }

    public static byte[] hex2byte(byte[] b) {
        if ((b.length % 2) != 0)
            throw new IllegalArgumentException("长度不是偶数");
            throw new IllegalArgumentException("The length is not an even number");
        byte[] b2 = new byte[b.length / 2];
        for (int n = 0; n < b.length; n += 2) {
            String item = new String(b, n, 2);
            b2[n / 2] = (byte) Integer.parseInt(item, 16);
        }
        return b2;
    }

    /*
     * public static void main(String[] args) { try { String src = "cheetah";
     * String crypto = DESCipher.encrypt(src); System.out.println("ciphertext[" + src +
     * "]:" + crypto); System.out.println("decrypted:" + DESCipher.decrypt(crypto)); }
     * catch (Exception e) { e.printStackTrace(); } }
     */
}
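A short usage sketch of the relocated utility. Note that the built-in `KEY` is now empty, so callers are expected to pass their own key, which for DES must be at least 8 bytes; the key and plaintext below are made up for illustration:

```java
import com.alibaba.datax.common.util.DESCipher;

public class DesCipherDemo {
    public static void main(String[] args) {
        String key = "12345678";          // illustrative 8-byte DES key
        String plaintext = "cheetah";     // same sample value as the commented-out main above

        String hexCipher = DESCipher.encrypt(plaintext, key);
        String decrypted = DESCipher.decrypt(hexCipher, key);

        System.out.println(hexCipher);
        System.out.println(decrypted);    // prints "cheetah"
    }
}
```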
@ -0,0 +1,33 @@
package com.alibaba.datax.common.util;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;


public class DataXCaseEnvUtil {

    private static final Logger LOGGER = LoggerFactory.getLogger(DataXCaseEnvUtil.class);

    // Speeds up DataX regression tests: these environment variables override the retry parameters.
    private static String DATAX_AUTOTEST_RETRY_TIME = System.getenv("DATAX_AUTOTEST_RETRY_TIME");
    private static String DATAX_AUTOTEST_RETRY_INTERVAL = System.getenv("DATAX_AUTOTEST_RETRY_INTERVAL");
    private static String DATAX_AUTOTEST_RETRY_EXPONENTIAL = System.getenv("DATAX_AUTOTEST_RETRY_EXPONENTIAL");

    public static int getRetryTimes(int retryTimes) {
        int actualRetryTimes = DATAX_AUTOTEST_RETRY_TIME != null ? Integer.valueOf(DATAX_AUTOTEST_RETRY_TIME) : retryTimes;
        // LOGGER.info("The actualRetryTimes is {}", actualRetryTimes);
        return actualRetryTimes;
    }

    public static long getRetryInterval(long retryInterval) {
        long actualRetryInterval = DATAX_AUTOTEST_RETRY_INTERVAL != null ? Long.valueOf(DATAX_AUTOTEST_RETRY_INTERVAL) : retryInterval;
        // LOGGER.info("The actualRetryInterval is {}", actualRetryInterval);
        return actualRetryInterval;
    }

    public static boolean getRetryExponential(boolean retryExponential) {
        boolean actualRetryExponential = DATAX_AUTOTEST_RETRY_EXPONENTIAL != null ? Boolean.valueOf(DATAX_AUTOTEST_RETRY_EXPONENTIAL) : retryExponential;
        // LOGGER.info("The actualRetryExponential is {}", actualRetryExponential);
        return actualRetryExponential;
    }
}
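The class simply lets the `DATAX_AUTOTEST_RETRY_*` environment variables override whatever retry parameters a plugin would normally pass, which shortens regression runs. A hedged sketch of the intended call pattern, assuming DataX's existing `RetryUtil.executeWithRetry(callable, retryTimes, sleepTimeInMilliSecond, exponential)` helper:

```java
import java.util.concurrent.Callable;

import com.alibaba.datax.common.util.DataXCaseEnvUtil;
import com.alibaba.datax.common.util.RetryUtil;

public class RetryDemo {
    public static void main(String[] args) throws Exception {
        Callable<String> connect = () -> "connected"; // stand-in for a real connection attempt

        // In regression tests, DATAX_AUTOTEST_RETRY_TIME / _INTERVAL / _EXPONENTIAL
        // override the defaults passed in here (3 retries, 1s, exponential back-off).
        String result = RetryUtil.executeWithRetry(connect,
                DataXCaseEnvUtil.getRetryTimes(3),
                DataXCaseEnvUtil.getRetryInterval(1000L),
                DataXCaseEnvUtil.getRetryExponential(true));
        System.out.println(result);
    }
}
```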
@ -0,0 +1,62 @@
package com.alibaba.datax.common.util;

import java.util.Map;

import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.alibaba.datax.common.exception.DataXException;

public class IdAndKeyRollingUtil {
    private static Logger LOGGER = LoggerFactory.getLogger(IdAndKeyRollingUtil.class);
    public static final String SKYNET_ACCESSID = "SKYNET_ACCESSID";
    public static final String SKYNET_ACCESSKEY = "SKYNET_ACCESSKEY";

    public final static String ACCESS_ID = "accessId";
    public final static String ACCESS_KEY = "accessKey";

    public static String parseAkFromSkynetAccessKey() {
        Map<String, String> envProp = System.getenv();
        String skynetAccessID = envProp.get(IdAndKeyRollingUtil.SKYNET_ACCESSID);
        String skynetAccessKey = envProp.get(IdAndKeyRollingUtil.SKYNET_ACCESSKEY);
        String accessKey = null;
        // Follow the original condition: if SKYNET_ACCESSID/SKYNET_ACCESSKEY exist in the
        // environment (having either one was treated as having both):
        // if (StringUtils.isNotBlank(skynetAccessID) ||
        // StringUtils.isNotBlank(skynetAccessKey)) {
        // The check is stricter now and only proceeds when the encrypted key is non-blank;
        // any key that worked before should not be blank anyway.
        if (StringUtils.isNotBlank(skynetAccessKey)) {
            LOGGER.info("Try to get accessId/accessKey from environment SKYNET_ACCESSKEY.");
            accessKey = DESCipher.decrypt(skynetAccessKey);
            if (StringUtils.isBlank(accessKey)) {
                // The environment variable exists but could not be parsed.
                throw DataXException.asDataXException(String.format(
                        "Failed to get the [accessId]/[accessKey] from the environment variable. The [accessId]=[%s]",
                        skynetAccessID));
            }
        }
        if (StringUtils.isNotBlank(accessKey)) {
            LOGGER.info("Get accessId/accessKey from environment variables SKYNET_ACCESSKEY successfully.");
        }
        return accessKey;
    }

    public static String getAccessIdAndKeyFromEnv(Configuration originalConfig) {
        String accessId = null;
        Map<String, String> envProp = System.getenv();
        accessId = envProp.get(IdAndKeyRollingUtil.SKYNET_ACCESSID);
        String accessKey = null;
        if (StringUtils.isBlank(accessKey)) {
            // The old code did not throw here; it simply could not obtain the AK.
            accessKey = IdAndKeyRollingUtil.parseAkFromSkynetAccessKey();
        }

        if (StringUtils.isNotBlank(accessKey)) {
            // Callers of this helper are expected to use the accessId/accessKey naming convention.
            originalConfig.set(IdAndKeyRollingUtil.ACCESS_ID, accessId);
            originalConfig.set(IdAndKeyRollingUtil.ACCESS_KEY, accessKey);
        }
        return accessKey;
    }
}
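A sketch of how a plugin's job initialization might pull credentials out of the environment and into its configuration. The configuration JSON is illustrative, and whether a given plugin actually calls this helper depends on that plugin:

```java
import com.alibaba.datax.common.util.Configuration;
import com.alibaba.datax.common.util.IdAndKeyRollingUtil;

public class CredentialRollingDemo {
    public static void main(String[] args) {
        Configuration jobConf = Configuration.from("{\"accessId\":\"\",\"accessKey\":\"\"}");

        // Reads SKYNET_ACCESSID / SKYNET_ACCESSKEY, decrypts the key with DESCipher,
        // and (if present) writes accessId/accessKey back into the configuration.
        String accessKey = IdAndKeyRollingUtil.getAccessIdAndKeyFromEnv(jobConf);

        if (accessKey == null || accessKey.isEmpty()) {
            System.out.println("no credentials found in the environment, falling back to the job config");
        }
    }
}
```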
@ -6,6 +6,7 @@ import org.apache.commons.lang3.StringUtils;

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;

/**
@ -136,4 +137,25 @@ public final class ListUtil {

        return result;
    }

    public static Boolean checkIfHasSameValue(List<String> listA, List<String> listB) {
        if (null == listA || listA.isEmpty() || null == listB || listB.isEmpty()) {
            return false;
        }

        for (String oneValue : listA) {
            if (listB.contains(oneValue)) {
                return true;
            }
        }

        return false;
    }

    public static boolean checkIfAllSameValue(List<String> listA, List<String> listB) {
        if (null == listA || listA.isEmpty() || null == listB || listB.isEmpty() || listA.size() != listB.size()) {
            return false;
        }
        return new HashSet<>(listA).containsAll(new HashSet<>(listB));
    }
}
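A quick sketch of the difference between the two new helpers: `checkIfHasSameValue` tests for any overlap, while `checkIfAllSameValue` requires the two lists to contain the same values (same size, compared as sets). The column names used here are made up:

```java
import java.util.Arrays;
import java.util.List;

import com.alibaba.datax.common.util.ListUtil;

public class ListUtilDemo {
    public static void main(String[] args) {
        List<String> a = Arrays.asList("id", "name", "gmt_create");
        List<String> b = Arrays.asList("name", "id", "gmt_create");
        List<String> c = Arrays.asList("id", "price");

        System.out.println(ListUtil.checkIfHasSameValue(a, c)); // true  ("id" overlaps)
        System.out.println(ListUtil.checkIfAllSameValue(a, b)); // true  (same values, order ignored)
        System.out.println(ListUtil.checkIfAllSameValue(a, c)); // false (different sizes/values)
    }
}
```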
@ -0,0 +1,54 @@
very_like_yixiao=\u4e00{0}\u4e8c{1}\u4e09

configuration.1=\u914d\u7f6e\u4fe1\u606f\u9519\u8bef\uff0c\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u6587\u4ef6[{0}]\u4e0d\u5b58\u5728. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u6587\u4ef6.
configuration.2=\u914d\u7f6e\u4fe1\u606f\u9519\u8bef. \u60a8\u63d0\u4f9b\u914d\u7f6e\u6587\u4ef6[{0}]\u8bfb\u53d6\u5931\u8d25\uff0c\u9519\u8bef\u539f\u56e0: {1}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u6587\u4ef6\u7684\u6743\u9650\u8bbe\u7f6e.
configuration.3=\u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u6587\u4ef6. \u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u6587\u4ef6\u8bfb\u53d6\u5931\u8d25\uff0c\u9519\u8bef\u539f\u56e0: {0}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u6587\u4ef6\u7684\u6743\u9650\u8bbe\u7f6e.
configuration.4=\u60a8\u63d0\u4f9b\u914d\u7f6e\u6587\u4ef6\u6709\u8bef\uff0c[{0}]\u662f\u5fc5\u586b\u53c2\u6570\uff0c\u4e0d\u5141\u8bb8\u4e3a\u7a7a\u6216\u8005\u7559\u767d .
configuration.5=\u60a8\u63d0\u4f9b\u914d\u7f6e\u6587\u4ef6\u6709\u8bef\uff0c[{0}]\u662f\u5fc5\u586b\u53c2\u6570\uff0c\u4e0d\u5141\u8bb8\u4e3a\u7a7a\u6216\u8005\u7559\u767d .
configuration.6=\u4efb\u52a1\u8bfb\u53d6\u914d\u7f6e\u6587\u4ef6\u51fa\u9519. \u56e0\u4e3a\u914d\u7f6e\u6587\u4ef6\u8def\u5f84[{0}] \u503c\u975e\u6cd5\uff0c\u671f\u671b\u662f\u5b57\u7b26\u7c7b\u578b: {1}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
configuration.7=\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u4fe1\u606f\u6709\u8bef\uff0c\u56e0\u4e3a\u4ece[{0}]\u83b7\u53d6\u7684\u503c[{1}]\u65e0\u6cd5\u8f6c\u6362\u4e3abool\u7c7b\u578b. \u8bf7\u68c0\u67e5\u6e90\u8868\u7684\u914d\u7f6e\u5e76\u4e14\u505a\u51fa\u76f8\u5e94\u7684\u4fee\u6539.
configuration.8=\u4efb\u52a1\u8bfb\u53d6\u914d\u7f6e\u6587\u4ef6\u51fa\u9519. \u914d\u7f6e\u6587\u4ef6\u8def\u5f84[{0}] \u503c\u975e\u6cd5, \u671f\u671b\u662f\u6574\u6570\u7c7b\u578b: {1}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
configuration.9=\u4efb\u52a1\u8bfb\u53d6\u914d\u7f6e\u6587\u4ef6\u51fa\u9519. \u914d\u7f6e\u6587\u4ef6\u8def\u5f84[{0}] \u503c\u975e\u6cd5, \u671f\u671b\u662f\u6574\u6570\u7c7b\u578b: {1}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
configuration.10=\u4efb\u52a1\u8bfb\u53d6\u914d\u7f6e\u6587\u4ef6\u51fa\u9519. \u914d\u7f6e\u6587\u4ef6\u8def\u5f84[{0}] \u503c\u975e\u6cd5, \u671f\u671b\u662f\u6d6e\u70b9\u7c7b\u578b: {1}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
configuration.11=\u914d\u7f6e\u6587\u4ef6\u5bf9\u5e94Key[{0}]\u5e76\u4e0d\u5b58\u5728\uff0c\u8be5\u60c5\u51b5\u662f\u4ee3\u7801\u7f16\u7a0b\u9519\u8bef. \u8bf7\u8054\u7cfbDataX\u56e2\u961f\u7684\u540c\u5b66.
configuration.12=\u503c[{0}]\u65e0\u6cd5\u9002\u914d\u60a8\u63d0\u4f9b[{1}]\uff0c \u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f!
configuration.13=Path\u4e0d\u80fd\u4e3anull\uff0c\u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f !
configuration.14=\u8def\u5f84[{0}]\u51fa\u73b0\u975e\u6cd5\u503c\u7c7b\u578b[{1}]\uff0c\u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f! .
configuration.15=\u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f !
configuration.16=\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u6587\u4ef6\u6709\u8bef. \u8def\u5f84[{0}]\u9700\u8981\u914d\u7f6eJson\u683c\u5f0f\u7684Map\u5bf9\u8c61\uff0c\u4f46\u8be5\u8282\u70b9\u53d1\u73b0\u5b9e\u9645\u7c7b\u578b\u662f[{1}]. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
configuration.17=\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u6587\u4ef6\u6709\u8bef. \u8def\u5f84[{0}]\u503c\u4e3anull\uff0cdatax\u65e0\u6cd5\u8bc6\u522b\u8be5\u914d\u7f6e. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
configuration.18=\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u6587\u4ef6\u6709\u8bef. \u8def\u5f84[{0}]\u9700\u8981\u914d\u7f6eJson\u683c\u5f0f\u7684Map\u5bf9\u8c61\uff0c\u4f46\u8be5\u8282\u70b9\u53d1\u73b0\u5b9e\u9645\u7c7b\u578b\u662f[{1}]. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
configuration.19=\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef\uff0c\u5217\u8868\u4e0b\u6807\u5fc5\u987b\u4e3a\u6570\u5b57\u7c7b\u578b\uff0c\u4f46\u8be5\u8282\u70b9\u53d1\u73b0\u5b9e\u9645\u7c7b\u578b\u662f[{0}] \uff0c\u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f !
configuration.20=\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f!.
configuration.21=\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8def\u5f84[{0}]\u4e0d\u5408\u6cd5, \u8def\u5f84\u5c42\u6b21\u4e4b\u95f4\u4e0d\u80fd\u51fa\u73b0\u7a7a\u767d\u5b57\u7b26 .
configuration.22=\u914d\u7f6e\u4fe1\u606f\u9519\u8bef. \u56e0\u4e3a\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u4fe1\u606f\u4e0d\u662f\u5408\u6cd5\u7684JSON\u683c\u5f0f, JSON\u4e0d\u80fd\u4e3a\u7a7a\u767d. \u8bf7\u6309\u7167\u6807\u51c6json\u683c\u5f0f\u63d0\u4f9b\u914d\u7f6e\u4fe1\u606f.
configuration.23=\u914d\u7f6e\u4fe1\u606f\u9519\u8bef. \u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u4fe1\u606f\u4e0d\u662f\u5408\u6cd5\u7684JSON\u683c\u5f0f: {0} . \u8bf7\u6309\u7167\u6807\u51c6json\u683c\u5f0f\u63d0\u4f9b\u914d\u7f6e\u4fe1\u606f.

listutil.1=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef\uff0cList\u4e0d\u80fd\u4e3a\u7a7a.
listutil.2=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef, List\u4e0d\u80fd\u4e3a\u7a7a.
listutil.3=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u4fe1\u606f\u6709\u8bef, String:[{0}] \u4e0d\u5141\u8bb8\u91cd\u590d\u51fa\u73b0\u5728\u5217\u8868\u4e2d: [{1}].
listutil.4=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef, List\u4e0d\u80fd\u4e3a\u7a7a.
listutil.5=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef, List\u4e0d\u80fd\u4e3a\u7a7a.
listutil.6=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u4fe1\u606f\u6709\u8bef, String:[{0}] \u4e0d\u5b58\u5728\u4e8e\u5217\u8868\u4e2d:[{1}].
listutil.7=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef, List\u4e0d\u80fd\u4e3a\u7a7a.
listutil.8=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef, List\u4e0d\u80fd\u4e3a\u7a7a.

rangesplitutil.1=\u5207\u5206\u4efd\u6570\u4e0d\u80fd\u5c0f\u4e8e1. \u6b64\u5904:expectSliceNumber=[{0}].
rangesplitutil.2=\u5bf9 BigInteger \u8fdb\u884c\u5207\u5206\u65f6\uff0c\u5176\u5de6\u53f3\u533a\u95f4\u4e0d\u80fd\u4e3a null. \u6b64\u5904:left=[{0}],right=[{1}].
rangesplitutil.3=\u53c2\u6570 bigInteger \u4e0d\u80fd\u4e3a\u7a7a.
rangesplitutil.4=\u6839\u636e\u5b57\u7b26\u4e32\u8fdb\u884c\u5207\u5206\u65f6\u4ec5\u652f\u6301 ASCII \u5b57\u7b26\u4e32\uff0c\u800c\u5b57\u7b26\u4e32:[{0}]\u975e ASCII \u5b57\u7b26\u4e32.
rangesplitutil.5=\u53c2\u6570 bigInteger \u4e0d\u80fd\u4e3a\u7a7a.
rangesplitutil.6=\u6839\u636e\u5b57\u7b26\u4e32\u8fdb\u884c\u5207\u5206\u65f6\u4ec5\u652f\u6301 ASCII \u5b57\u7b26\u4e32\uff0c\u800c\u5b57\u7b26\u4e32:[{0}]\u975e ASCII \u5b57\u7b26\u4e32.

retryutil.1=\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u5165\u53c2callable\u4e0d\u80fd\u4e3a\u7a7a !
retryutil.2=\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u5165\u53c2retrytime[%d]\u4e0d\u80fd\u5c0f\u4e8e1 !
retryutil.3=Exception when calling callable, \u5f02\u5e38Msg:{0}
retryutil.4=Exception when calling callable, \u5373\u5c06\u5c1d\u8bd5\u6267\u884c\u7b2c{0}\u6b21\u91cd\u8bd5,\u5171\u8ba1\u91cd\u8bd5{1}\u6b21.\u672c\u6b21\u91cd\u8bd5\u8ba1\u5212\u7b49\u5f85[{2}]ms,\u5b9e\u9645\u7b49\u5f85[{3}]ms, \u5f02\u5e38Msg:[{4}]

httpclientutil.1=\u8BF7\u6C42\u5730\u5740\uFF1A{0}, \u8BF7\u6C42\u65B9\u6CD5\uFF1A{1}, STATUS CODE = {2}, Response Entity: {3}
httpclientutil.2=\u8FDC\u7A0B\u63A5\u53E3\u8FD4\u56DE-1,\u5C06\u91CD\u8BD5
@ -0,0 +1,53 @@
very_like_yixiao=1{0}2{1}3

configuration.1=Configuration information error. The configuration file [{0}] you provided does not exist. Please check your configuration files.
configuration.2=Configuration information error. Failed to read the configuration file [{0}] you provided. Error reason: {1}. Please check the permission settings of your configuration files.
configuration.3=Please check your configuration files. Failed to read the configuration file you provided. Error reason: {0}. Please check the permission settings of your configuration files.
configuration.4=The configuration file you provided contains errors. [{0}] is a required parameter and cannot be empty or blank.
configuration.5=The configuration file you provided contains errors. [{0}] is a required parameter and cannot be empty or blank.
configuration.6=Task reading configuration file error. Invalid configuration file path [{0}] value. The expected value should be of the character type: {1}. Please check your configuration and make corrections.
configuration.7=The configuration information you provided contains errors. The value [{1}] obtained from [{0}] cannot be converted to the Bool type. Please check the source table configuration and make corrections.
configuration.8=Task reading configuration file error. Invalid configuration file path [{0}] value. The expected value should be of the integer type: {1}. Please check your configuration and make corrections.
configuration.9=Task reading configuration file error. Invalid configuration file path [{0}] value. The expected value should be of the integer type: {1}. Please check your configuration and make corrections.
configuration.10=Task reading configuration file error. Invalid configuration file path [{0}] value. The expected value should be of the floating-point type: {1}. Please check your configuration and make corrections.
configuration.11=The Key [{0}] for the configuration file does not exist. This is a code programming error. Please contact the DataX team.
configuration.12=The value [{0}] cannot adapt to the [{1}] you provided. This exception represents a system programming error. Please contact the DataX developer team.
configuration.13=The path cannot be null. This exception represents a system programming error. Please contact the DataX developer team.
configuration.14=The path [{0}] has an invalid value type [{1}]. This exception represents a system programming error. Please contact the DataX developer team.
configuration.15=This exception represents a system programming error. Please contact the DataX developer team.
configuration.16=The configuration file you provided contains errors. The path [{0}] requires you to configure a Map object in JSON format, but the actual type found on the node is [{1}]. Please check your configuration and make corrections.
configuration.17=The configuration file you provided contains errors. The value of the path [{0}] is null and DataX cannot recognize the configuration. Please check your configuration and make corrections.
configuration.18=The configuration file you provided contains errors. The path [{0}] requires you to configure a Map object in JSON format, but the actual type found on the node is [{1}]. Please check your configuration and make corrections.
configuration.19=System programming error. The list subscript must be of the numeric type, but the actual type found on this node is [{0}]. This exception represents a system programming error. Please contact the DataX developer team.
configuration.20=System programming error. This exception represents a system programming error. Please contact the DataX developer team.
configuration.21=System programming error. Invalid path [{0}]. No spaces are allowed between path layers.
configuration.22=Configuration information error. The configuration information you provided is not in a legal JSON format. JSON cannot be blank. Please provide the configuration information in the standard JSON format.
configuration.23=Configuration information error. The configuration information you provided is not in a valid JSON format: {0}. Please provide the configuration information in the standard JSON format.

listutil.1=The job configuration you provided contains errors. The list cannot be empty.
listutil.2=The job configuration you provided contains errors. The list cannot be empty.
listutil.3=The job configuration information you provided contains errors. String: [{0}] is not allowed to be repeated in the list: [{1}].
listutil.4=The job configuration you provided contains errors. The list cannot be empty.
listutil.5=The job configuration you provided contains errors. The list cannot be empty.
listutil.6=The job configuration information you provided contains errors. String: [{0}] does not exist in the list: [{1}].
listutil.7=The job configuration you provided contains errors. The list cannot be empty.
listutil.8=The job configuration you provided contains errors. The list cannot be empty.

rangesplitutil.1=The slice number cannot be less than 1. Here: [expectSliceNumber]=[{0}].
rangesplitutil.2=The left or right intervals of BigInteger character strings cannot be null when they are sliced. Here: [left]=[{0}], [right]=[{1}].
rangesplitutil.3=The [bigInteger] parameter cannot be null.
rangesplitutil.4=Only ASCII character strings are supported for character string slicing, but the [{0}] character string is not of the ASCII type.
rangesplitutil.5=The [bigInteger] parameter cannot be null.
rangesplitutil.6=Only ASCII character strings are supported for character string slicing, but the [{0}] character string is not of the ASCII type.

retryutil.1=System programming error. The “callable” input parameter cannot be null.
retryutil.2=System programming error. The “retrytime[%d]” input parameter cannot be less than 1.
retryutil.3=Exception when calling callable. Exception Msg: {0}
retryutil.4=Exception when calling callable. Retry Attempt: {0} will start soon. {1} attempts in total. This attempt planned to wait for [{2}]ms, and actually waited for [{3}]ms. Exception Msg: [{4}].

httpclientutil.1=Request address: {0}. Request method: {1}. STATUS CODE = {2}, Response Entity: {3}
httpclientutil.2=The remote interface returns -1. We will try again
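These bundles follow standard `java.util.Properties`/`MessageFormat` conventions: the `{0}`, `{1}` placeholders are filled in at runtime. A minimal, hedged sketch of how such a key can be resolved; plain `ResourceBundle` is used for illustration, DataX wires the bundles through its own message utilities, and the bundle base name below is an assumption:

```java
import java.text.MessageFormat;
import java.util.Locale;
import java.util.ResourceBundle;

public class LocalStringsDemo {
    public static void main(String[] args) {
        // Base name is illustrative; the real bundles sit next to the classes that use them.
        ResourceBundle bundle = ResourceBundle.getBundle(
                "com.alibaba.datax.common.util.LocalStrings", Locale.US);

        String pattern = bundle.getString("configuration.1");
        // -> "Configuration information error. The configuration file [job.json] you provided does not exist. ..."
        System.out.println(MessageFormat.format(pattern, "job.json"));
    }
}
```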
@ -0,0 +1,104 @@
|
||||
very_like_yixiao=\u4e00{0}\u4e8c{1}\u4e09
|
||||
|
||||
|
||||
configuration.1=\u914d\u7f6e\u4fe1\u606f\u9519\u8bef\uff0c\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u6587\u4ef6[{0}]\u4e0d\u5b58\u5728. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u6587\u4ef6.
|
||||
configuration.2=\u914d\u7f6e\u4fe1\u606f\u9519\u8bef. \u60a8\u63d0\u4f9b\u914d\u7f6e\u6587\u4ef6[{0}]\u8bfb\u53d6\u5931\u8d25\uff0c\u9519\u8bef\u539f\u56e0: {1}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u6587\u4ef6\u7684\u6743\u9650\u8bbe\u7f6e.
|
||||
configuration.3=\u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u6587\u4ef6. \u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u6587\u4ef6\u8bfb\u53d6\u5931\u8d25\uff0c\u9519\u8bef\u539f\u56e0: {0}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u6587\u4ef6\u7684\u6743\u9650\u8bbe\u7f6e.
|
||||
configuration.4=\u60a8\u63d0\u4f9b\u914d\u7f6e\u6587\u4ef6\u6709\u8bef\uff0c[{0}]\u662f\u5fc5\u586b\u53c2\u6570\uff0c\u4e0d\u5141\u8bb8\u4e3a\u7a7a\u6216\u8005\u7559\u767d .
|
||||
configuration.5=\u60a8\u63d0\u4f9b\u914d\u7f6e\u6587\u4ef6\u6709\u8bef\uff0c[{0}]\u662f\u5fc5\u586b\u53c2\u6570\uff0c\u4e0d\u5141\u8bb8\u4e3a\u7a7a\u6216\u8005\u7559\u767d .
|
||||
configuration.6=\u4efb\u52a1\u8bfb\u53d6\u914d\u7f6e\u6587\u4ef6\u51fa\u9519. \u56e0\u4e3a\u914d\u7f6e\u6587\u4ef6\u8def\u5f84[{0}] \u503c\u975e\u6cd5\uff0c\u671f\u671b\u662f\u5b57\u7b26\u7c7b\u578b: {1}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
|
||||
configuration.7=\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u4fe1\u606f\u6709\u8bef\uff0c\u56e0\u4e3a\u4ece[{0}]\u83b7\u53d6\u7684\u503c[{1}]\u65e0\u6cd5\u8f6c\u6362\u4e3abool\u7c7b\u578b. \u8bf7\u68c0\u67e5\u6e90\u8868\u7684\u914d\u7f6e\u5e76\u4e14\u505a\u51fa\u76f8\u5e94\u7684\u4fee\u6539.
|
||||
configuration.8=\u4efb\u52a1\u8bfb\u53d6\u914d\u7f6e\u6587\u4ef6\u51fa\u9519. \u914d\u7f6e\u6587\u4ef6\u8def\u5f84[{0}] \u503c\u975e\u6cd5, \u671f\u671b\u662f\u6574\u6570\u7c7b\u578b: {1}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
|
||||
configuration.9=\u4efb\u52a1\u8bfb\u53d6\u914d\u7f6e\u6587\u4ef6\u51fa\u9519. \u914d\u7f6e\u6587\u4ef6\u8def\u5f84[{0}] \u503c\u975e\u6cd5, \u671f\u671b\u662f\u6574\u6570\u7c7b\u578b: {1}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
|
||||
configuration.10=\u4efb\u52a1\u8bfb\u53d6\u914d\u7f6e\u6587\u4ef6\u51fa\u9519. \u914d\u7f6e\u6587\u4ef6\u8def\u5f84[{0}] \u503c\u975e\u6cd5, \u671f\u671b\u662f\u6d6e\u70b9\u7c7b\u578b: {1}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
|
||||
configuration.11=\u914d\u7f6e\u6587\u4ef6\u5bf9\u5e94Key[{0}]\u5e76\u4e0d\u5b58\u5728\uff0c\u8be5\u60c5\u51b5\u662f\u4ee3\u7801\u7f16\u7a0b\u9519\u8bef. \u8bf7\u8054\u7cfbDataX\u56e2\u961f\u7684\u540c\u5b66.
|
||||
configuration.12=\u503c[{0}]\u65e0\u6cd5\u9002\u914d\u60a8\u63d0\u4f9b[{1}]\uff0c \u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f!
|
||||
configuration.13=Path\u4e0d\u80fd\u4e3anull\uff0c\u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f !
|
||||
configuration.14=\u8def\u5f84[{0}]\u51fa\u73b0\u975e\u6cd5\u503c\u7c7b\u578b[{1}]\uff0c\u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f! .
|
||||
configuration.15=\u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f !
|
||||
configuration.16=\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u6587\u4ef6\u6709\u8bef. \u8def\u5f84[{0}]\u9700\u8981\u914d\u7f6eJson\u683c\u5f0f\u7684Map\u5bf9\u8c61\uff0c\u4f46\u8be5\u8282\u70b9\u53d1\u73b0\u5b9e\u9645\u7c7b\u578b\u662f[{1}]. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
|
||||
configuration.17=\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u6587\u4ef6\u6709\u8bef. \u8def\u5f84[{0}]\u503c\u4e3anull\uff0cdatax\u65e0\u6cd5\u8bc6\u522b\u8be5\u914d\u7f6e. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
|
||||
configuration.18=\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u6587\u4ef6\u6709\u8bef. \u8def\u5f84[{0}]\u9700\u8981\u914d\u7f6eJson\u683c\u5f0f\u7684Map\u5bf9\u8c61\uff0c\u4f46\u8be5\u8282\u70b9\u53d1\u73b0\u5b9e\u9645\u7c7b\u578b\u662f[{1}]. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
|
||||
configuration.19=\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef\uff0c\u5217\u8868\u4e0b\u6807\u5fc5\u987b\u4e3a\u6570\u5b57\u7c7b\u578b\uff0c\u4f46\u8be5\u8282\u70b9\u53d1\u73b0\u5b9e\u9645\u7c7b\u578b\u662f[{0}] \uff0c\u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f !
|
||||
configuration.20=\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f!.
|
||||
configuration.21=\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8def\u5f84[{0}]\u4e0d\u5408\u6cd5, \u8def\u5f84\u5c42\u6b21\u4e4b\u95f4\u4e0d\u80fd\u51fa\u73b0\u7a7a\u767d\u5b57\u7b26 .
|
||||
configuration.22=\u914d\u7f6e\u4fe1\u606f\u9519\u8bef. \u56e0\u4e3a\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u4fe1\u606f\u4e0d\u662f\u5408\u6cd5\u7684JSON\u683c\u5f0f, JSON\u4e0d\u80fd\u4e3a\u7a7a\u767d. \u8bf7\u6309\u7167\u6807\u51c6json\u683c\u5f0f\u63d0\u4f9b\u914d\u7f6e\u4fe1\u606f.
|
||||
configuration.23=\u914d\u7f6e\u4fe1\u606f\u9519\u8bef. \u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u4fe1\u606f\u4e0d\u662f\u5408\u6cd5\u7684JSON\u683c\u5f0f: {0} . \u8bf7\u6309\u7167\u6807\u51c6json\u683c\u5f0f\u63d0\u4f9b\u914d\u7f6e\u4fe1\u606f.
|
||||
|
||||
|
||||
listutil.1=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef\uff0cList\u4e0d\u80fd\u4e3a\u7a7a.
|
||||
listutil.2=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef, List\u4e0d\u80fd\u4e3a\u7a7a.
|
||||
listutil.3=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u4fe1\u606f\u6709\u8bef, String:[{0}] \u4e0d\u5141\u8bb8\u91cd\u590d\u51fa\u73b0\u5728\u5217\u8868\u4e2d: [{1}].
|
||||
listutil.4=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef, List\u4e0d\u80fd\u4e3a\u7a7a.
|
||||
listutil.5=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef, List\u4e0d\u80fd\u4e3a\u7a7a.
|
||||
listutil.6=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u4fe1\u606f\u6709\u8bef, String:[{0}] \u4e0d\u5b58\u5728\u4e8e\u5217\u8868\u4e2d:[{1}].
|
||||
listutil.7=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef, List\u4e0d\u80fd\u4e3a\u7a7a.
|
||||
listutil.8=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef, List\u4e0d\u80fd\u4e3a\u7a7a.
|
||||
|
||||
|
||||
rangesplitutil.1=\u5207\u5206\u4efd\u6570\u4e0d\u80fd\u5c0f\u4e8e1. \u6b64\u5904:expectSliceNumber=[{0}].
|
||||
rangesplitutil.2=\u5bf9 BigInteger \u8fdb\u884c\u5207\u5206\u65f6\uff0c\u5176\u5de6\u53f3\u533a\u95f4\u4e0d\u80fd\u4e3a null. \u6b64\u5904:left=[{0}],right=[{1}].
|
||||
rangesplitutil.3=\u53c2\u6570 bigInteger \u4e0d\u80fd\u4e3a\u7a7a.
|
||||
rangesplitutil.4=\u6839\u636e\u5b57\u7b26\u4e32\u8fdb\u884c\u5207\u5206\u65f6\u4ec5\u652f\u6301 ASCII \u5b57\u7b26\u4e32\uff0c\u800c\u5b57\u7b26\u4e32:[{0}]\u975e ASCII \u5b57\u7b26\u4e32.
|
||||
rangesplitutil.5=\u53c2\u6570 bigInteger \u4e0d\u80fd\u4e3a\u7a7a.
|
||||
rangesplitutil.6=\u6839\u636e\u5b57\u7b26\u4e32\u8fdb\u884c\u5207\u5206\u65f6\u4ec5\u652f\u6301 ASCII \u5b57\u7b26\u4e32\uff0c\u800c\u5b57\u7b26\u4e32:[{0}]\u975e ASCII \u5b57\u7b26\u4e32.
|
||||
|
||||
|
||||
retryutil.1=\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u5165\u53c2callable\u4e0d\u80fd\u4e3a\u7a7a !
|
||||
retryutil.2=\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u5165\u53c2retrytime[%d]\u4e0d\u80fd\u5c0f\u4e8e1 !
|
||||
retryutil.3=Exception when calling callable, \u5f02\u5e38Msg:{0}
|
||||
retryutil.4=Exception when calling callable, \u5373\u5c06\u5c1d\u8bd5\u6267\u884c\u7b2c{0}\u6b21\u91cd\u8bd5,\u5171\u8ba1\u91cd\u8bd5{1}\u6b21.\u672c\u6b21\u91cd\u8bd5\u8ba1\u5212\u7b49\u5f85[{2}]ms,\u5b9e\u9645\u7b49\u5f85[{3}]ms, \u5f02\u5e38Msg:[{4}]
|
||||
|
||||
very_like_yixiao=一{0}二{1}三
|
||||
|
||||
|
||||
configuration.1=配置資訊錯誤,您提供的配置檔案[{0}]不存在. 請檢查您的配置檔案.
|
||||
configuration.2=配置資訊錯誤. 您提供配置檔案[{0}]讀取失敗,錯誤原因: {1}. 請檢查您的配置檔案的權限設定.
|
||||
configuration.3=請檢查您的配置檔案. 您提供的配置檔案讀取失敗,錯誤原因: {0}. 請檢查您的配置檔案的權限設定.
|
||||
configuration.4=您提供配置檔案有誤,[{0}]是必填參數,不允許為空或者留白 .
|
||||
configuration.5=您提供配置檔案有誤,[{0}]是必填參數,不允許為空或者留白 .
|
||||
configuration.6=任務讀取配置檔案出錯. 因為配置檔案路徑[{0}] 值不合法,期望是字符類型: {1}. 請檢查您的配置並作出修改.
|
||||
configuration.7=您提供的配置資訊有誤,因為從[{0}]獲取的值[{1}]無法轉換為bool類型. 請檢查源表的配置並且做出相應的修改.
|
||||
configuration.8=任務讀取配置檔案出錯. 配置檔案路徑[{0}] 值不合法, 期望是整數類型: {1}. 請檢查您的配置並作出修改.
|
||||
configuration.9=任務讀取配置檔案出錯. 配置檔案路徑[{0}] 值不合法, 期望是整數類型: {1}. 請檢查您的配置並作出修改.
|
||||
configuration.10=任務讀取配置檔案出錯. 配置檔案路徑[{0}] 值不合法, 期望是浮點類型: {1}. 請檢查您的配置並作出修改.
|
||||
configuration.11=配置檔案對應Key[{0}]並不存在,該情況是代碼編程錯誤. 請聯絡DataX團隊的同學.
|
||||
configuration.12=值[{0}]無法適配您提供[{1}], 該異常代表系統編程錯誤, 請聯絡DataX開發團隊!
|
||||
configuration.13=Path不能為null,該異常代表系統編程錯誤, 請聯絡DataX開發團隊 !
|
||||
configuration.14=路徑[{0}]出現不合法值類型[{1}],該異常代表系統編程錯誤, 請聯絡DataX開發團隊! .
|
||||
configuration.15=該異常代表系統編程錯誤, 請聯絡DataX開發團隊 !
|
||||
configuration.16=您提供的配置檔案有誤. 路徑[{0}]需要配置Json格式的Map對象,但該節點發現實際類型是[{1}]. 請檢查您的配置並作出修改.
|
||||
configuration.17=您提供的配置檔案有誤. 路徑[{0}]值為null,datax無法識別該配置. 請檢查您的配置並作出修改.
|
||||
configuration.18=您提供的配置檔案有誤. 路徑[{0}]需要配置Json格式的Map對象,但該節點發現實際類型是[{1}]. 請檢查您的配置並作出修改.
|
||||
configuration.19=系統編程錯誤,清單下標必須為數字類型,但該節點發現實際類型是[{0}] ,該異常代表系統編程錯誤, 請聯絡DataX開發團隊 !
|
||||
configuration.20=系統編程錯誤, 該異常代表系統編程錯誤, 請聯絡DataX開發團隊!.
|
||||
configuration.21=系統編程錯誤, 路徑[{0}]不合法, 路徑層次之間不能出現空白字符 .
|
||||
configuration.22=配置資訊錯誤. 因為您提供的配置資訊不是合法的JSON格式, JSON不能為空白. 請按照標準json格式提供配置資訊.
|
||||
configuration.23=配置資訊錯誤. 您提供的配置資訊不是合法的JSON格式: {0}. 請按照標準json格式提供配置資訊.
|
||||
|
||||
|
||||
listutil.1=您提供的作業配置有誤,List不能為空.
|
||||
listutil.2=您提供的作業配置有誤, List不能為空.
|
||||
listutil.3=您提供的作業配置資訊有誤, String:[{0}]不允許重複出現在清單中: [{1}].
|
||||
listutil.4=您提供的作業配置有誤, List不能為空.
|
||||
listutil.5=您提供的作業配置有誤, List不能為空.
|
||||
listutil.6=您提供的作業配置資訊有誤, String:[{0}]不存在於清單中:[{1}].
|
||||
listutil.7=您提供的作業配置有誤, List不能為空.
|
||||
listutil.8=您提供的作業配置有誤, List不能為空.
|
||||
|
||||
|
||||
rangesplitutil.1=切分份數不能小於1. 此處:expectSliceNumber=[{0}].
|
||||
rangesplitutil.2=對 BigInteger 進行切分時,其左右區間不能為 null. 此處:left=[{0}],right=[{1}].
|
||||
rangesplitutil.3=參數 bigInteger 不能為空.
|
||||
rangesplitutil.4=根據字符串進行切分時僅支援 ASCII 字符串,而字符串:[{0}]非 ASCII 字符串.
|
||||
rangesplitutil.5=參數 bigInteger 不能為空.
|
||||
rangesplitutil.6=根據字符串進行切分時僅支援 ASCII 字符串,而字符串:[{0}]非 ASCII 字符串.
|
||||
|
||||
|
||||
retryutil.1=系統編程錯誤, 入參callable不能為空 !
|
||||
retryutil.2=系統編程錯誤, 入參retrytime[%d]不能小於1 !
|
||||
retryutil.3=Exception when calling callable, 異常Msg:{0}
|
||||
retryutil.4=Exception when calling callable, 即將嘗試執行第{0}次重試,共計重試{1}次.本次重試計劃等待[{2}]ms,實際等待[{3}]ms, 異常Msg:[{4}]
|
||||
|
||||
httpclientutil.1=\u8ACB\u6C42\u5730\u5740\uFF1A{0}, \u8ACB\u6C42\u65B9\u6CD5\uFF1A{1},STATUS CODE = {2}, Response Entity: {3}
|
||||
httpclientutil.2=\u9060\u7A0B\u63A5\u53E3\u8FD4\u56DE-1,\u5C07\u91CD\u8A66
|
@ -0,0 +1,104 @@
|
||||
very_like_yixiao=\u4e00{0}\u4e8c{1}\u4e09
|
||||
|
||||
|
||||
configuration.1=\u914d\u7f6e\u4fe1\u606f\u9519\u8bef\uff0c\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u6587\u4ef6[{0}]\u4e0d\u5b58\u5728. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u6587\u4ef6.
|
||||
configuration.2=\u914d\u7f6e\u4fe1\u606f\u9519\u8bef. \u60a8\u63d0\u4f9b\u914d\u7f6e\u6587\u4ef6[{0}]\u8bfb\u53d6\u5931\u8d25\uff0c\u9519\u8bef\u539f\u56e0: {1}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u6587\u4ef6\u7684\u6743\u9650\u8bbe\u7f6e.
|
||||
configuration.3=\u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u6587\u4ef6. \u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u6587\u4ef6\u8bfb\u53d6\u5931\u8d25\uff0c\u9519\u8bef\u539f\u56e0: {0}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u6587\u4ef6\u7684\u6743\u9650\u8bbe\u7f6e.
|
||||
configuration.4=\u60a8\u63d0\u4f9b\u914d\u7f6e\u6587\u4ef6\u6709\u8bef\uff0c[{0}]\u662f\u5fc5\u586b\u53c2\u6570\uff0c\u4e0d\u5141\u8bb8\u4e3a\u7a7a\u6216\u8005\u7559\u767d .
|
||||
configuration.5=\u60a8\u63d0\u4f9b\u914d\u7f6e\u6587\u4ef6\u6709\u8bef\uff0c[{0}]\u662f\u5fc5\u586b\u53c2\u6570\uff0c\u4e0d\u5141\u8bb8\u4e3a\u7a7a\u6216\u8005\u7559\u767d .
|
||||
configuration.6=\u4efb\u52a1\u8bfb\u53d6\u914d\u7f6e\u6587\u4ef6\u51fa\u9519. \u56e0\u4e3a\u914d\u7f6e\u6587\u4ef6\u8def\u5f84[{0}] \u503c\u975e\u6cd5\uff0c\u671f\u671b\u662f\u5b57\u7b26\u7c7b\u578b: {1}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
|
||||
configuration.7=\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u4fe1\u606f\u6709\u8bef\uff0c\u56e0\u4e3a\u4ece[{0}]\u83b7\u53d6\u7684\u503c[{1}]\u65e0\u6cd5\u8f6c\u6362\u4e3abool\u7c7b\u578b. \u8bf7\u68c0\u67e5\u6e90\u8868\u7684\u914d\u7f6e\u5e76\u4e14\u505a\u51fa\u76f8\u5e94\u7684\u4fee\u6539.
|
||||
configuration.8=\u4efb\u52a1\u8bfb\u53d6\u914d\u7f6e\u6587\u4ef6\u51fa\u9519. \u914d\u7f6e\u6587\u4ef6\u8def\u5f84[{0}] \u503c\u975e\u6cd5, \u671f\u671b\u662f\u6574\u6570\u7c7b\u578b: {1}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
|
||||
configuration.9=\u4efb\u52a1\u8bfb\u53d6\u914d\u7f6e\u6587\u4ef6\u51fa\u9519. \u914d\u7f6e\u6587\u4ef6\u8def\u5f84[{0}] \u503c\u975e\u6cd5, \u671f\u671b\u662f\u6574\u6570\u7c7b\u578b: {1}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
|
||||
configuration.10=\u4efb\u52a1\u8bfb\u53d6\u914d\u7f6e\u6587\u4ef6\u51fa\u9519. \u914d\u7f6e\u6587\u4ef6\u8def\u5f84[{0}] \u503c\u975e\u6cd5, \u671f\u671b\u662f\u6d6e\u70b9\u7c7b\u578b: {1}. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
|
||||
configuration.11=\u914d\u7f6e\u6587\u4ef6\u5bf9\u5e94Key[{0}]\u5e76\u4e0d\u5b58\u5728\uff0c\u8be5\u60c5\u51b5\u662f\u4ee3\u7801\u7f16\u7a0b\u9519\u8bef. \u8bf7\u8054\u7cfbDataX\u56e2\u961f\u7684\u540c\u5b66.
|
||||
configuration.12=\u503c[{0}]\u65e0\u6cd5\u9002\u914d\u60a8\u63d0\u4f9b[{1}]\uff0c \u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f!
|
||||
configuration.13=Path\u4e0d\u80fd\u4e3anull\uff0c\u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f !
|
||||
configuration.14=\u8def\u5f84[{0}]\u51fa\u73b0\u975e\u6cd5\u503c\u7c7b\u578b[{1}]\uff0c\u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f! .
|
||||
configuration.15=\u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f !
|
||||
configuration.16=\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u6587\u4ef6\u6709\u8bef. \u8def\u5f84[{0}]\u9700\u8981\u914d\u7f6eJson\u683c\u5f0f\u7684Map\u5bf9\u8c61\uff0c\u4f46\u8be5\u8282\u70b9\u53d1\u73b0\u5b9e\u9645\u7c7b\u578b\u662f[{1}]. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
|
||||
configuration.17=\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u6587\u4ef6\u6709\u8bef. \u8def\u5f84[{0}]\u503c\u4e3anull\uff0cdatax\u65e0\u6cd5\u8bc6\u522b\u8be5\u914d\u7f6e. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
|
||||
configuration.18=\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u6587\u4ef6\u6709\u8bef. \u8def\u5f84[{0}]\u9700\u8981\u914d\u7f6eJson\u683c\u5f0f\u7684Map\u5bf9\u8c61\uff0c\u4f46\u8be5\u8282\u70b9\u53d1\u73b0\u5b9e\u9645\u7c7b\u578b\u662f[{1}]. \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4f5c\u51fa\u4fee\u6539.
|
||||
configuration.19=\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef\uff0c\u5217\u8868\u4e0b\u6807\u5fc5\u987b\u4e3a\u6570\u5b57\u7c7b\u578b\uff0c\u4f46\u8be5\u8282\u70b9\u53d1\u73b0\u5b9e\u9645\u7c7b\u578b\u662f[{0}] \uff0c\u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f !
|
||||
configuration.20=\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8be5\u5f02\u5e38\u4ee3\u8868\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8bf7\u8054\u7cfbDataX\u5f00\u53d1\u56e2\u961f!.
|
||||
configuration.21=\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u8def\u5f84[{0}]\u4e0d\u5408\u6cd5, \u8def\u5f84\u5c42\u6b21\u4e4b\u95f4\u4e0d\u80fd\u51fa\u73b0\u7a7a\u767d\u5b57\u7b26 .
|
||||
configuration.22=\u914d\u7f6e\u4fe1\u606f\u9519\u8bef. \u56e0\u4e3a\u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u4fe1\u606f\u4e0d\u662f\u5408\u6cd5\u7684JSON\u683c\u5f0f, JSON\u4e0d\u80fd\u4e3a\u7a7a\u767d. \u8bf7\u6309\u7167\u6807\u51c6json\u683c\u5f0f\u63d0\u4f9b\u914d\u7f6e\u4fe1\u606f.
|
||||
configuration.23=\u914d\u7f6e\u4fe1\u606f\u9519\u8bef. \u60a8\u63d0\u4f9b\u7684\u914d\u7f6e\u4fe1\u606f\u4e0d\u662f\u5408\u6cd5\u7684JSON\u683c\u5f0f: {0} . \u8bf7\u6309\u7167\u6807\u51c6json\u683c\u5f0f\u63d0\u4f9b\u914d\u7f6e\u4fe1\u606f.
|
||||
|
||||
|
||||
listutil.1=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef\uff0cList\u4e0d\u80fd\u4e3a\u7a7a.
|
||||
listutil.2=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef, List\u4e0d\u80fd\u4e3a\u7a7a.
|
||||
listutil.3=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u4fe1\u606f\u6709\u8bef, String:[{0}] \u4e0d\u5141\u8bb8\u91cd\u590d\u51fa\u73b0\u5728\u5217\u8868\u4e2d: [{1}].
|
||||
listutil.4=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef, List\u4e0d\u80fd\u4e3a\u7a7a.
|
||||
listutil.5=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef, List\u4e0d\u80fd\u4e3a\u7a7a.
|
||||
listutil.6=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u4fe1\u606f\u6709\u8bef, String:[{0}] \u4e0d\u5b58\u5728\u4e8e\u5217\u8868\u4e2d:[{1}].
|
||||
listutil.7=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef, List\u4e0d\u80fd\u4e3a\u7a7a.
|
||||
listutil.8=\u60a8\u63d0\u4f9b\u7684\u4f5c\u4e1a\u914d\u7f6e\u6709\u8bef, List\u4e0d\u80fd\u4e3a\u7a7a.
|
||||
|
||||
|
||||
rangesplitutil.1=\u5207\u5206\u4efd\u6570\u4e0d\u80fd\u5c0f\u4e8e1. \u6b64\u5904:expectSliceNumber=[{0}].
|
||||
rangesplitutil.2=\u5bf9 BigInteger \u8fdb\u884c\u5207\u5206\u65f6\uff0c\u5176\u5de6\u53f3\u533a\u95f4\u4e0d\u80fd\u4e3a null. \u6b64\u5904:left=[{0}],right=[{1}].
|
||||
rangesplitutil.3=\u53c2\u6570 bigInteger \u4e0d\u80fd\u4e3a\u7a7a.
|
||||
rangesplitutil.4=\u6839\u636e\u5b57\u7b26\u4e32\u8fdb\u884c\u5207\u5206\u65f6\u4ec5\u652f\u6301 ASCII \u5b57\u7b26\u4e32\uff0c\u800c\u5b57\u7b26\u4e32:[{0}]\u975e ASCII \u5b57\u7b26\u4e32.
|
||||
rangesplitutil.5=\u53c2\u6570 bigInteger \u4e0d\u80fd\u4e3a\u7a7a.
|
||||
rangesplitutil.6=\u6839\u636e\u5b57\u7b26\u4e32\u8fdb\u884c\u5207\u5206\u65f6\u4ec5\u652f\u6301 ASCII \u5b57\u7b26\u4e32\uff0c\u800c\u5b57\u7b26\u4e32:[{0}]\u975e ASCII \u5b57\u7b26\u4e32.
|
||||
|
||||
|
||||
retryutil.1=\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u5165\u53c2callable\u4e0d\u80fd\u4e3a\u7a7a !
|
||||
retryutil.2=\u7cfb\u7edf\u7f16\u7a0b\u9519\u8bef, \u5165\u53c2retrytime[%d]\u4e0d\u80fd\u5c0f\u4e8e1 !
|
||||
retryutil.3=Exception when calling callable, \u5f02\u5e38Msg:{0}
|
||||
retryutil.4=Exception when calling callable, \u5373\u5c06\u5c1d\u8bd5\u6267\u884c\u7b2c{0}\u6b21\u91cd\u8bd5,\u5171\u8ba1\u91cd\u8bd5{1}\u6b21.\u672c\u6b21\u91cd\u8bd5\u8ba1\u5212\u7b49\u5f85[{2}]ms,\u5b9e\u9645\u7b49\u5f85[{3}]ms, \u5f02\u5e38Msg:[{4}]
|
||||
|
||||
very_like_yixiao=一{0}二{1}三
|
||||
|
||||
|
||||
configuration.1=配置資訊錯誤,您提供的配置檔案[{0}]不存在. 請檢查您的配置檔案.
|
||||
configuration.2=配置資訊錯誤. 您提供配置檔案[{0}]讀取失敗,錯誤原因: {1}. 請檢查您的配置檔案的權限設定.
|
||||
configuration.3=請檢查您的配置檔案. 您提供的配置檔案讀取失敗,錯誤原因: {0}. 請檢查您的配置檔案的權限設定.
|
||||
configuration.4=您提供配置檔案有誤,[{0}]是必填參數,不允許為空或者留白 .
|
||||
configuration.5=您提供配置檔案有誤,[{0}]是必填參數,不允許為空或者留白 .
|
||||
configuration.6=任務讀取配置檔案出錯. 因為配置檔案路徑[{0}] 值不合法,期望是字符類型: {1}. 請檢查您的配置並作出修改.
|
||||
configuration.7=您提供的配置資訊有誤,因為從[{0}]獲取的值[{1}]無法轉換為bool類型. 請檢查源表的配置並且做出相應的修改.
|
||||
configuration.8=任務讀取配置檔案出錯. 配置檔案路徑[{0}] 值不合法, 期望是整數類型: {1}. 請檢查您的配置並作出修改.
|
||||
configuration.9=任務讀取配置檔案出錯. 配置檔案路徑[{0}] 值不合法, 期望是整數類型: {1}. 請檢查您的配置並作出修改.
|
||||
configuration.10=任務讀取配置檔案出錯. 配置檔案路徑[{0}] 值不合法, 期望是浮點類型: {1}. 請檢查您的配置並作出修改.
|
||||
configuration.11=配置檔案對應Key[{0}]並不存在,該情況是代碼編程錯誤. 請聯絡DataX團隊的同學.
|
||||
configuration.12=值[{0}]無法適配您提供[{1}], 該異常代表系統編程錯誤, 請聯絡DataX開發團隊!
|
||||
configuration.13=Path不能為null,該異常代表系統編程錯誤, 請聯絡DataX開發團隊 !
|
||||
configuration.14=路徑[{0}]出現不合法值類型[{1}],該異常代表系統編程錯誤, 請聯絡DataX開發團隊! .
|
||||
configuration.15=該異常代表系統編程錯誤, 請聯絡DataX開發團隊 !
|
||||
configuration.16=您提供的配置檔案有誤. 路徑[{0}]需要配置Json格式的Map對象,但該節點發現實際類型是[{1}]. 請檢查您的配置並作出修改.
|
||||
configuration.17=您提供的配置檔案有誤. 路徑[{0}]值為null,datax無法識別該配置. 請檢查您的配置並作出修改.
|
||||
configuration.18=您提供的配置檔案有誤. 路徑[{0}]需要配置Json格式的Map對象,但該節點發現實際類型是[{1}]. 請檢查您的配置並作出修改.
|
||||
configuration.19=系統編程錯誤,清單下標必須為數字類型,但該節點發現實際類型是[{0}] ,該異常代表系統編程錯誤, 請聯絡DataX開發團隊 !
|
||||
configuration.20=系統編程錯誤, 該異常代表系統編程錯誤, 請聯絡DataX開發團隊!.
|
||||
configuration.21=系統編程錯誤, 路徑[{0}]不合法, 路徑層次之間不能出現空白字符 .
|
||||
configuration.22=配置資訊錯誤. 因為您提供的配置資訊不是合法的JSON格式, JSON不能為空白. 請按照標準json格式提供配置資訊.
|
||||
configuration.23=配置資訊錯誤. 您提供的配置資訊不是合法的JSON格式: {0}. 請按照標準json格式提供配置資訊.
|
||||
|
||||
|
||||
listutil.1=您提供的作業配置有誤,List不能為空.
|
||||
listutil.2=您提供的作業配置有誤, List不能為空.
|
||||
listutil.3=您提供的作業配置資訊有誤, String:[{0}]不允許重複出現在清單中: [{1}].
|
||||
listutil.4=您提供的作業配置有誤, List不能為空.
|
||||
listutil.5=您提供的作業配置有誤, List不能為空.
|
||||
listutil.6=您提供的作業配置資訊有誤, String:[{0}]不存在於清單中:[{1}].
|
||||
listutil.7=您提供的作業配置有誤, List不能為空.
|
||||
listutil.8=您提供的作業配置有誤, List不能為空.
|
||||
|
||||
|
||||
rangesplitutil.1=切分份數不能小於1. 此處:expectSliceNumber=[{0}].
|
||||
rangesplitutil.2=對 BigInteger 進行切分時,其左右區間不能為 null. 此處:left=[{0}],right=[{1}].
|
||||
rangesplitutil.3=參數 bigInteger 不能為空.
|
||||
rangesplitutil.4=根據字符串進行切分時僅支援 ASCII 字符串,而字符串:[{0}]非 ASCII 字符串.
|
||||
rangesplitutil.5=參數 bigInteger 不能為空.
|
||||
rangesplitutil.6=根據字符串進行切分時僅支援 ASCII 字符串,而字符串:[{0}]非 ASCII 字符串.
|
||||
|
||||
|
||||
retryutil.1=系統編程錯誤, 入參callable不能為空 !
|
||||
retryutil.2=系統編程錯誤, 入參retrytime[%d]不能小於1 !
|
||||
retryutil.3=Exception when calling callable, 異常Msg:{0}
|
||||
retryutil.4=Exception when calling callable, 即將嘗試執行第{0}次重試,共計重試{1}次.本次重試計劃等待[{2}]ms,實際等待[{3}]ms, 異常Msg:[{4}]
|
||||
|
||||
httpclientutil.1=\u8BF7\u6C42\u5730\u5740\uFF1A{0}, \u8BF7\u6C42\u65B9\u6CD5\uFF1A{1},STATUS CODE = {2}, Response Entity: {3}
|
||||
httpclientutil.2=\u8FDC\u7A0B\u63A5\u53E3\u8FD4\u56DE-1,\u5C06\u91CD\u8BD5
|
@ -0,0 +1,207 @@
package com.alibaba.datax.common.util;

import java.text.MessageFormat;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import java.util.MissingResourceException;
import java.util.ResourceBundle;
import java.util.TimeZone;

import org.apache.commons.lang3.LocaleUtils;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;


public class MessageSource {
    private static final Logger LOG = LoggerFactory.getLogger(MessageSource.class);
    private static Map<String, ResourceBundle> resourceBundleCache = new HashMap<String, ResourceBundle>();
    public static Locale locale = null;
    public static TimeZone timeZone = null;
    private ResourceBundle resourceBundle = null;

    private MessageSource(ResourceBundle resourceBundle) {
        this.resourceBundle = resourceBundle;
    }

    /**
     * @param baseName
     *            demo: javax.servlet.http.LocalStrings
     *
     * @throws MissingResourceException
     *             - if no resource bundle for the specified base name can be
     *             found
     * */
    public static MessageSource loadResourceBundle(String baseName) {
        return loadResourceBundle(baseName, MessageSource.locale,
                MessageSource.timeZone);
    }

    /**
     * @param clazz
     *            the package name is derived from this class
     * */
    public static <T> MessageSource loadResourceBundle(Class<T> clazz) {
        return loadResourceBundle(clazz.getPackage().getName());
    }

    /**
     * @param clazz
     *            the package name is derived from this class
     * */
    public static <T> MessageSource loadResourceBundle(Class<T> clazz,
            Locale locale, TimeZone timeZone) {
        return loadResourceBundle(clazz.getPackage().getName(), locale,
                timeZone);
    }

    /**
     * warn:
     * ok: ResourceBundle.getBundle("xxx.LocalStrings", Locale.getDefault(), LoadUtil.getJarLoader(PluginType.WRITER, "odpswriter"))
     * error: ResourceBundle.getBundle("xxx.LocalStrings", Locale.getDefault(), LoadUtil.getJarLoader(PluginType.WRITER, "odpswriter"))
     * @param baseName
     *            demo: javax.servlet.http.LocalStrings
     *
     * @throws MissingResourceException
     *             - if no resource bundle for the specified base name can be
     *             found
     *
     * */
    public static MessageSource loadResourceBundle(String baseName,
            Locale locale, TimeZone timeZone) {
        ResourceBundle resourceBundle = null;
        if (null == locale) {
            locale = LocaleUtils.toLocale("en_US");
        }
        if (null == timeZone) {
            timeZone = TimeZone.getDefault();
        }
        String resourceBaseName = String.format("%s.LocalStrings", baseName);
        LOG.debug(
                "initEnvironment MessageSource.locale[{}], MessageSource.timeZone[{}]",
                MessageSource.locale, MessageSource.timeZone);
        LOG.debug(
                "loadResourceBundle with locale[{}], timeZone[{}], baseName[{}]",
                locale, timeZone, resourceBaseName);
        // warn: does maintaining this map need to take the Locale into account? no?
        if (!MessageSource.resourceBundleCache.containsKey(resourceBaseName)) {
            ClassLoader clazzLoader = Thread.currentThread()
                    .getContextClassLoader();
            LOG.debug("loadResourceBundle classLoader:{}", clazzLoader);
            resourceBundle = ResourceBundle.getBundle(resourceBaseName, locale,
                    clazzLoader);
            MessageSource.resourceBundleCache.put(resourceBaseName,
                    resourceBundle);
        } else {
            resourceBundle = MessageSource.resourceBundleCache
                    .get(resourceBaseName);
        }

        return new MessageSource(resourceBundle);
    }

    public static <T> boolean unloadResourceBundle(Class<T> clazz) {
        String baseName = clazz.getPackage().getName();
        String resourceBaseName = String.format("%s.LocalStrings", baseName);
        if (!MessageSource.resourceBundleCache.containsKey(resourceBaseName)) {
            return false;
        } else {
            MessageSource.resourceBundleCache.remove(resourceBaseName);
            return true;
        }
    }

    public static <T> MessageSource reloadResourceBundle(Class<T> clazz) {
        MessageSource.unloadResourceBundle(clazz);
        return MessageSource.loadResourceBundle(clazz);
    }

    public static void setEnvironment(Locale locale, TimeZone timeZone) {
        // warn: set the defaults? @2018.03.21 the commenting-out here was removed, otherwise problems occur under i18n with multiple time zones
        Locale.setDefault(locale);
        TimeZone.setDefault(timeZone);
        MessageSource.locale = locale;
        MessageSource.timeZone = timeZone;
        LOG.info("use Locale: {} timeZone: {}", locale, timeZone);
    }

    public static void init(final Configuration configuration) {
        Locale locale2Set = Locale.getDefault();
        String localeStr = configuration.getString("common.column.locale", "zh_CN");// defaults to the operating system's locale
        if (StringUtils.isNotBlank(localeStr)) {
            try {
                locale2Set = LocaleUtils.toLocale(localeStr);
            } catch (Exception e) {
                LOG.warn("ignored locale parse exception: {}", e.getMessage());
            }
        }

        TimeZone timeZone2Set = TimeZone.getDefault();
        String timeZoneStr = configuration.getString("common.column.timeZone");// defaults to the operating system's time zone
        if (StringUtils.isNotBlank(timeZoneStr)) {
            try {
                timeZone2Set = TimeZone.getTimeZone(timeZoneStr);
            } catch (Exception e) {
                LOG.warn("ignored timezone parse exception: {}", e.getMessage());
            }
        }

        LOG.info("JVM TimeZone: {}, Locale: {}", timeZone2Set.getID(), locale2Set);
        MessageSource.setEnvironment(locale2Set, timeZone2Set);
    }

    public static void clearCache() {
        MessageSource.resourceBundleCache.clear();
    }

    public String message(String code) {
        return this.messageWithDefaultMessage(code, null);
    }

    public String message(String code, String args1) {
        return this.messageWithDefaultMessage(code, null,
                new Object[] { args1 });
    }

    public String message(String code, String args1, String args2) {
        return this.messageWithDefaultMessage(code, null, new Object[] { args1,
                args2 });
    }

    public String message(String code, String args1, String args2, String args3) {
        return this.messageWithDefaultMessage(code, null, new Object[] { args1,
                args2, args3 });
    }

    // the overloads above cover most cases; avoiding this varargs variant saves a little performance
    public String message(String code, Object... args) {
        return this.messageWithDefaultMessage(code, null, args);
    }

    public String messageWithDefaultMessage(String code, String defaultMessage) {
        return this.messageWithDefaultMessage(code, defaultMessage,
                new Object[] {});
    }

    /**
     * @param args
     *            MessageFormat will call toString on each argument in turn
     * */
    public String messageWithDefaultMessage(String code, String defaultMessage,
            Object... args) {
        String messageStr = null;
        try {
            messageStr = this.resourceBundle.getString(code);
        } catch (MissingResourceException e) {
            messageStr = defaultMessage;
        }
        if (null != messageStr && null != args && args.length > 0) {
            // warn: see loadResourceBundle set default locale
            return MessageFormat.format(messageStr, args);
        } else {
            return messageStr;
        }

    }
}
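For orientation, here is a minimal usage sketch of the MessageSource class above. It assumes a LocalStrings_*.properties bundle (such as the ones added elsewhere in this commit) sits in the calling class's package; the class name in the sketch is illustrative and not part of the commit.

```java
import com.alibaba.datax.common.util.MessageSource;

public class MessageSourceUsageSketch {
    public static void main(String[] args) {
        // Resolves "<package of this class>.LocalStrings" with the engine-wide locale/timeZone
        // (falls back to en_US / the default time zone when neither has been set yet).
        MessageSource messageSource = MessageSource.loadResourceBundle(MessageSourceUsageSketch.class);

        // Looks up the key and lets MessageFormat fill in the {0}/{1} placeholders.
        String text = messageSource.message("very_like_yixiao", "A", "B");

        // Falls back to the supplied default when the key is missing from the bundle.
        String fallback = messageSource.messageWithDefaultMessage("no.such.key", "default text");

        System.out.println(text);
        System.out.println(fallback);
    }
}
```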
@ -206,4 +206,27 @@ public final class RangeSplitUtil {
        return true;
    }


    /**
     * Utility for splitting a List, used mainly by the split logic of reader plugins
     * */
    public static <T> List<List<T>> doListSplit(List<T> objects, int adviceNumber) {
        List<List<T>> splitLists = new ArrayList<List<T>>();
        if (null == objects) {
            return splitLists;
        }
        long[] splitPoint = RangeSplitUtil.doLongSplit(0, objects.size(), adviceNumber);
        for (int startIndex = 0; startIndex < splitPoint.length - 1; startIndex++) {
            List<T> objectsForTask = new ArrayList<T>();
            int endIndex = startIndex + 1;
            for (long i = splitPoint[startIndex]; i < splitPoint[endIndex]; i++) {
                objectsForTask.add(objects.get((int) i));
            }
            if (!objectsForTask.isEmpty()) {
                splitLists.add(objectsForTask);
            }
        }
        return splitLists;
    }

}
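A short usage sketch of the doListSplit helper added above, splitting a list of work items into at most adviceNumber roughly even slices the way a reader plugin's split() might. The table names, the sketch class, and the assumption that RangeSplitUtil lives in com.alibaba.datax.common.util are illustrative.

```java
import java.util.Arrays;
import java.util.List;

import com.alibaba.datax.common.util.RangeSplitUtil;

public class ListSplitSketch {
    public static void main(String[] args) {
        // Seven "tables" to be spread over at most three tasks.
        List<String> tables = Arrays.asList("t1", "t2", "t3", "t4", "t5", "t6", "t7");

        // Each inner list becomes one task's share; empty slices are dropped by doListSplit.
        List<List<String>> slices = RangeSplitUtil.doListSplit(tables, 3);

        for (List<String> slice : slices) {
            System.out.println(slice); // exact grouping depends on doLongSplit's cut points
        }
    }
}
```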
@ -100,6 +100,14 @@
    </dependencies>

    <build>
        <resources>
            <resource>
                <directory>src/main/java</directory>
                <includes>
                    <include>**/*.properties</include>
                </includes>
            </resource>
        </resources>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
@ -6,6 +6,7 @@ import com.alibaba.datax.common.spi.ErrorCode;
import com.alibaba.datax.common.statistics.PerfTrace;
import com.alibaba.datax.common.statistics.VMInfo;
import com.alibaba.datax.common.util.Configuration;
import com.alibaba.datax.common.util.MessageSource;
import com.alibaba.datax.core.job.JobContainer;
import com.alibaba.datax.core.taskgroup.TaskGroupContainer;
import com.alibaba.datax.core.util.ConfigParser;
@ -73,7 +74,7 @@ public class Engine {
        boolean traceEnable = allConf.getBool(CoreConstant.DATAX_CORE_CONTAINER_TRACE_ENABLE, true);
        boolean perfReportEnable = allConf.getBool(CoreConstant.DATAX_CORE_REPORT_DATAX_PERFLOG, true);

        //standlone-mode datax shell tasks do not report
        //standalone-mode datax shell tasks do not report
        if(instanceId == -1){
            perfReportEnable = false;
        }
@ -135,6 +136,9 @@ public class Engine {
        RUNTIME_MODE = cl.getOptionValue("mode");

        Configuration configuration = ConfigParser.parse(jobPath);
        // bind the i18n settings
        MessageSource.init(configuration);
        MessageSource.reloadResourceBundle(Configuration.class);

        long jobId;
        if (!"-1".equalsIgnoreCase(jobIdString)) {
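The hunk above binds the i18n environment from the parsed job configuration before the engine starts. A hedged sketch of that wiring follows; the inline JSON fragment and the use of Configuration.from(String) are assumptions for illustration, while the two MessageSource calls mirror the hunk.

```java
import com.alibaba.datax.common.util.Configuration;
import com.alibaba.datax.common.util.MessageSource;

public class I18nInitSketch {
    public static void main(String[] args) {
        // Illustrative job fragment: the keys read by MessageSource.init().
        String json = "{\"common\":{\"column\":{\"locale\":\"en_US\",\"timeZone\":\"GMT+8\"}}}";
        Configuration configuration = Configuration.from(json);

        // Same calls as in the Engine hunk: bind locale/timeZone globally, then reload
        // the bundle cached for Configuration so its messages pick up the new locale.
        MessageSource.init(configuration);
        MessageSource.reloadResourceBundle(Configuration.class);
    }
}
```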
@ -0,0 +1,5 @@
very_like_yixiao=\u4e00{0}\u4e8c{1}\u4e09

engine.1=\u975e standalone \u6a21\u5f0f\u5fc5\u987b\u5728 URL \u4e2d\u63d0\u4f9b\u6709\u6548\u7684 jobId.
engine.2=\n\n\u7ecfDataX\u667a\u80fd\u5206\u6790,\u8be5\u4efb\u52a1\u6700\u53ef\u80fd\u7684\u9519\u8bef\u539f\u56e0\u662f:\n{0}

@ -0,0 +1,5 @@
very_like_yixiao=1{0}2{1}3

engine.1=A valid job ID must be provided in the URL for the non-standalone mode.
engine.2=\n\nThrough the intelligent analysis by DataX, the most likely error reason of this task is: \n{0}

@ -0,0 +1,5 @@
very_like_yixiao=1{0}2{1}3

engine.1=\u975e standalone \u6a21\u5f0f\u5fc5\u987b\u5728 URL \u4e2d\u63d0\u4f9b\u6709\u6548\u7684 jobId.
engine.2=\n\n\u7ecfDataX\u667a\u80fd\u5206\u6790,\u8be5\u4efb\u52a1\u6700\u53ef\u80fd\u7684\u9519\u8bef\u539f\u56e0\u662f:\n{0}

@ -0,0 +1,5 @@
very_like_yixiao=\u4e00{0}\u4e8c{1}\u4e09

engine.1=\u975e standalone \u6a21\u5f0f\u5fc5\u987b\u5728 URL \u4e2d\u63d0\u4f9b\u6709\u6548\u7684 jobId.
engine.2=\n\n\u7ecfDataX\u667a\u80fd\u5206\u6790,\u8be5\u4efb\u52a1\u6700\u53ef\u80fd\u7684\u9519\u8bef\u539f\u56e0\u662f:\n{0}

@ -0,0 +1,10 @@
very_like_yixiao=\u4e00{0}\u4e8c{1}\u4e09

engine.1=\u975e standalone \u6a21\u5f0f\u5fc5\u987b\u5728 URL \u4e2d\u63d0\u4f9b\u6709\u6548\u7684 jobId.
engine.2=\n\n\u7ecfDataX\u667a\u80fd\u5206\u6790,\u8be5\u4efb\u52a1\u6700\u53ef\u80fd\u7684\u9519\u8bef\u539f\u56e0\u662f:\n{0}

very_like_yixiao=一{0}二{1}三

engine.1=非 standalone 模式必須在 URL 中提供有效的 jobId.
engine.2=\n\n經DataX智能分析,該任務最可能的錯誤原因是:\n{0}

@ -0,0 +1,10 @@
very_like_yixiao=\u4e00{0}\u4e8c{1}\u4e09

engine.1=\u975e standalone \u6a21\u5f0f\u5fc5\u987b\u5728 URL \u4e2d\u63d0\u4f9b\u6709\u6548\u7684 jobId.
engine.2=\n\n\u7ecfDataX\u667a\u80fd\u5206\u6790,\u8be5\u4efb\u52a1\u6700\u53ef\u80fd\u7684\u9519\u8bef\u539f\u56e0\u662f:\n{0}

very_like_yixiao=一{0}二{1}三

engine.1=非 standalone 模式必須在 URL 中提供有效的 jobId.
engine.2=\n\n經DataX智能分析,該任務最可能的錯誤原因是:\n{0}
@ -11,15 +11,18 @@ import java.math.BigInteger;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Map;

public class DirtyRecord implements Record {
    private List<Column> columns = new ArrayList<Column>();
    private Map<String, String> meta;

    public static DirtyRecord asDirtyRecord(final Record record) {
        DirtyRecord result = new DirtyRecord();
        for (int i = 0; i < record.getColumnNumber(); i++) {
            result.addColumn(record.getColumn(i));
        }
        result.setMeta(record.getMeta());

        return result;
    }
@ -65,6 +68,16 @@ public class DirtyRecord implements Record {
                "该方法不支持!");
    }

    @Override
    public void setMeta(Map<String, String> meta) {
        this.meta = meta;
    }

    @Override
    public Map<String, String> getMeta() {
        return this.meta;
    }

    public List<Column> getColumns() {
        return columns;
    }
@ -120,6 +133,12 @@ class DirtyColumn extends Column {
                "该方法不支持!");
    }

    @Override
    public Date asDate(String dateFormat) {
        throw DataXException.asDataXException(FrameworkErrorCode.RUNTIME_ERROR,
                "该方法不支持!");
    }

    @Override
    public byte[] asBytes() {
        throw DataXException.asDataXException(FrameworkErrorCode.RUNTIME_ERROR,

@ -27,6 +27,8 @@ public class DefaultRecord implements Record {
    // first of all, the memory needed by the Record itself
    private int memorySize = ClassSize.DefaultRecordHead;

    private Map<String, String> meta;

    public DefaultRecord() {
        this.columns = new ArrayList<Column>(RECORD_AVERGAE_COLUMN_NUMBER);
    }
@ -83,6 +85,16 @@ public class DefaultRecord implements Record {
        return memorySize;
    }

    @Override
    public void setMeta(Map<String, String> meta) {
        this.meta = meta;
    }

    @Override
    public Map<String, String> getMeta() {
        return this.meta;
    }

    private void decrByteSize(final Column column) {
        if (null == column) {
            return;

@ -3,6 +3,8 @@ package com.alibaba.datax.core.transport.record;
import com.alibaba.datax.common.element.Column;
import com.alibaba.datax.common.element.Record;

import java.util.Map;

/**
 * Marker indicating that the producer has finished producing
 *
@ -41,6 +43,16 @@ public class TerminateRecord implements Record {
        return 0;
    }

    @Override
    public void setMeta(Map<String, String> meta) {

    }

    @Override
    public Map<String, String> getMeta() {
        return null;
    }

    @Override
    public void setColumn(int i, Column column) {
        return;
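The three hunks above add a meta map to the Record implementations (DirtyRecord and DefaultRecord store it, TerminateRecord ignores it). Below is a brief sketch of how a plugin might use it; the "tableName" key and the sketch class are purely illustrative.

```java
import java.util.HashMap;
import java.util.Map;

import com.alibaba.datax.common.element.Record;
import com.alibaba.datax.core.transport.record.DefaultRecord;

public class RecordMetaSketch {
    public static void main(String[] args) {
        Record record = new DefaultRecord();

        // Attach free-form metadata to the record, as enabled by the hunks above.
        Map<String, String> meta = new HashMap<String, String>();
        meta.put("tableName", "orders"); // illustrative key/value
        record.setMeta(meta);

        // A downstream consumer (e.g. a writer, or DirtyRecord.asDirtyRecord) can read it back.
        System.out.println(record.getMeta().get("tableName"));
    }
}
```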
@ -0,0 +1,58 @@
|
||||
configparser.1=\u63D2\u4EF6[{0},{1}]\u52A0\u8F7D\u5931\u8D25\uFF0C1s\u540E\u91CD\u8BD5... Exception:{2}
|
||||
configparser.2=\u83B7\u53D6\u4F5C\u4E1A\u914D\u7F6E\u4FE1\u606F\u5931\u8D25:{0}
|
||||
configparser.3=\u83B7\u53D6\u4F5C\u4E1A\u914D\u7F6E\u4FE1\u606F\u5931\u8D25:{0}
|
||||
configparser.4=\u83B7\u53D6\u4F5C\u4E1A\u914D\u7F6E\u4FE1\u606F\u5931\u8D25:{0}
|
||||
configparser.5=\u63D2\u4EF6\u52A0\u8F7D\u5931\u8D25\uFF0C\u672A\u5B8C\u6210\u6307\u5B9A\u63D2\u4EF6\u52A0\u8F7D:{0}
|
||||
configparser.6=\u63D2\u4EF6\u52A0\u8F7D\u5931\u8D25,\u5B58\u5728\u91CD\u590D\u63D2\u4EF6:{0}
|
||||
|
||||
dataxserviceutil.1=\u521B\u5EFA\u7B7E\u540D\u5F02\u5E38NoSuchAlgorithmException, [{0}]
|
||||
dataxserviceutil.2=\u521B\u5EFA\u7B7E\u540D\u5F02\u5E38InvalidKeyException, [{0}]
|
||||
dataxserviceutil.3=\u521B\u5EFA\u7B7E\u540D\u5F02\u5E38UnsupportedEncodingException, [{0}]
|
||||
|
||||
errorrecordchecker.1=\u810F\u6570\u636E\u767E\u5206\u6BD4\u9650\u5236\u5E94\u8BE5\u5728[0.0, 1.0]\u4E4B\u95F4
|
||||
errorrecordchecker.2=\u810F\u6570\u636E\u6761\u6570\u73B0\u5728\u5E94\u8BE5\u4E3A\u975E\u8D1F\u6574\u6570
|
||||
errorrecordchecker.3=\u810F\u6570\u636E\u6761\u6570\u68C0\u67E5\u4E0D\u901A\u8FC7\uFF0C\u9650\u5236\u662F[{0}]\u6761\uFF0C\u4F46\u5B9E\u9645\u4E0A\u6355\u83B7\u4E86[{1}]\u6761.
|
||||
errorrecordchecker.4=\u810F\u6570\u636E\u767E\u5206\u6BD4\u68C0\u67E5\u4E0D\u901A\u8FC7\uFF0C\u9650\u5236\u662F[{0}]\uFF0C\u4F46\u5B9E\u9645\u4E0A\u6355\u83B7\u5230[{1}].
|
||||
|
||||
|
||||
errorcode.install_error=DataX\u5F15\u64CE\u5B89\u88C5\u9519\u8BEF, \u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.argument_error=DataX\u5F15\u64CE\u8FD0\u884C\u9519\u8BEF\uFF0C\u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8E\u5185\u90E8\u7F16\u7A0B\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFBDataX\u5F00\u53D1\u56E2\u961F\u89E3\u51B3 .
|
||||
errorcode.runtime_error=DataX\u5F15\u64CE\u8FD0\u884C\u8FC7\u7A0B\u51FA\u9519\uFF0C\u5177\u4F53\u539F\u56E0\u8BF7\u53C2\u770BDataX\u8FD0\u884C\u7ED3\u675F\u65F6\u7684\u9519\u8BEF\u8BCA\u65AD\u4FE1\u606F .
|
||||
errorcode.config_error=DataX\u5F15\u64CE\u914D\u7F6E\u9519\u8BEF\uFF0C\u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.secret_error=DataX\u5F15\u64CE\u52A0\u89E3\u5BC6\u51FA\u9519\uFF0C\u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.hook_load_error=\u52A0\u8F7D\u5916\u90E8Hook\u51FA\u73B0\u9519\u8BEF\uFF0C\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u5F15\u8D77\u7684
|
||||
errorcode.hook_fail_error=\u6267\u884C\u5916\u90E8Hook\u51FA\u73B0\u9519\u8BEF
|
||||
errorcode.plugin_install_error=DataX\u63D2\u4EF6\u5B89\u88C5\u9519\u8BEF, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.plugin_not_found=DataX\u63D2\u4EF6\u914D\u7F6E\u9519\u8BEF, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.plugin_init_error=DataX\u63D2\u4EF6\u521D\u59CB\u5316\u9519\u8BEF, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.plugin_runtime_error=DataX\u63D2\u4EF6\u8FD0\u884C\u65F6\u51FA\u9519, \u5177\u4F53\u539F\u56E0\u8BF7\u53C2\u770BDataX\u8FD0\u884C\u7ED3\u675F\u65F6\u7684\u9519\u8BEF\u8BCA\u65AD\u4FE1\u606F .
|
||||
errorcode.plugin_dirty_data_limit_exceed=DataX\u4F20\u8F93\u810F\u6570\u636E\u8D85\u8FC7\u7528\u6237\u9884\u671F\uFF0C\u8BE5\u9519\u8BEF\u901A\u5E38\u662F\u7531\u4E8E\u6E90\u7AEF\u6570\u636E\u5B58\u5728\u8F83\u591A\u4E1A\u52A1\u810F\u6570\u636E\u5BFC\u81F4\uFF0C\u8BF7\u4ED4\u7EC6\u68C0\u67E5DataX\u6C47\u62A5\u7684\u810F\u6570\u636E\u65E5\u5FD7\u4FE1\u606F, \u6216\u8005\u60A8\u53EF\u4EE5\u9002\u5F53\u8C03\u5927\u810F\u6570\u636E\u9608\u503C .
|
||||
errorcode.plugin_split_error=DataX\u63D2\u4EF6\u5207\u5206\u51FA\u9519, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5404\u4E2A\u63D2\u4EF6\u7F16\u7A0B\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFBDataX\u5F00\u53D1\u56E2\u961F\u89E3\u51B3
|
||||
errorcode.kill_job_timeout_error=kill \u4EFB\u52A1\u8D85\u65F6\uFF0C\u8BF7\u8054\u7CFBPE\u89E3\u51B3
|
||||
errorcode.start_taskgroup_error=taskGroup\u542F\u52A8\u5931\u8D25,\u8BF7\u8054\u7CFBDataX\u5F00\u53D1\u56E2\u961F\u89E3\u51B3
|
||||
errorcode.call_datax_service_failed=\u8BF7\u6C42 DataX Service \u51FA\u9519.
|
||||
errorcode.call_remote_failed=\u8FDC\u7A0B\u8C03\u7528\u5931\u8D25
|
||||
errorcode.killed_exit_value=Job \u6536\u5230\u4E86 Kill \u547D\u4EE4.
|
||||
|
||||
|
||||
httpclientutil.1=\u8BF7\u6C42\u5730\u5740\uFF1A{0}, \u8BF7\u6C42\u65B9\u6CD5\uFF1A{1}, STATUS CODE = {2}, Response Entity: {3}
|
||||
httpclientutil.2=\u8FDC\u7A0B\u63A5\u53E3\u8FD4\u56DE-1,\u5C06\u91CD\u8BD5
|
||||
|
||||
|
||||
secretutil.1=\u7CFB\u7EDF\u7F16\u7A0B\u9519\u8BEF,\u4E0D\u652F\u6301\u7684\u52A0\u5BC6\u7C7B\u578B
|
||||
secretutil.2=\u7CFB\u7EDF\u7F16\u7A0B\u9519\u8BEF,\u4E0D\u652F\u6301\u7684\u52A0\u5BC6\u7C7B\u578B
|
||||
secretutil.3=rsa\u52A0\u5BC6\u51FA\u9519
|
||||
secretutil.4=rsa\u89E3\u5BC6\u51FA\u9519
|
||||
secretutil.5=3\u91CDDES\u52A0\u5BC6\u51FA\u9519
|
||||
secretutil.6=rsa\u89E3\u5BC6\u51FA\u9519
|
||||
secretutil.7=\u6784\u5EFA\u4E09\u91CDDES\u5BC6\u5319\u51FA\u9519
|
||||
secretutil.8=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u65E0\u6CD5\u627E\u5230\u5BC6\u94A5\u7684\u914D\u7F6E\u6587\u4EF6
|
||||
secretutil.9=\u8BFB\u53D6\u52A0\u89E3\u5BC6\u914D\u7F6E\u6587\u4EF6\u51FA\u9519
|
||||
secretutil.10=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E0D\u5B58\u5728\u60A8\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C
|
||||
secretutil.11=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u53EF\u80FD\u662F\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E5F\u53EF\u80FD\u662F\u7CFB\u7EDF\u7EF4\u62A4\u95EE\u9898
|
||||
secretutil.12=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E0D\u5B58\u5728\u60A8\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C
|
||||
secretutil.13=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u53EF\u80FD\u662F\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E5F\u53EF\u80FD\u662F\u7CFB\u7EDF\u7EF4\u62A4\u95EE\u9898
|
||||
secretutil.14=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C[{0}]\u5B58\u5728\u5BC6\u94A5\u4E3A\u7A7A\u7684\u60C5\u51B5
|
||||
secretutil.15=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u914D\u7F6E\u7684\u516C\u79C1\u94A5\u5BF9\u5B58\u5728\u4E3A\u7A7A\u7684\u60C5\u51B5\uFF0C\u7248\u672C[{0}]
|
||||
secretutil.16=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u65E0\u6CD5\u627E\u5230\u52A0\u89E3\u5BC6\u914D\u7F6E
|
||||
|
@ -0,0 +1,58 @@
|
||||
configparser.1=Failed to load the plug-in [{0},{1}]. We will retry in 1s... Exception: {2}
|
||||
configparser.2=Failed to obtain the job configuration information: {0}
|
||||
configparser.3=Failed to obtain the job configuration information: {0}
|
||||
configparser.4=Failed to obtain the job configuration information: {0}
|
||||
configparser.5=Failed to load the plug-in. Loading of the specific plug-in:{0} is not completed
|
||||
configparser.6=Failed to load the plug-in. A duplicate plug-in: {0} exists
|
||||
|
||||
dataxserviceutil.1=Exception in creating signature. NoSuchAlgorithmException, [{0}]
|
||||
dataxserviceutil.2=Exception in creating signature. InvalidKeyException, [{0}]
|
||||
dataxserviceutil.3=Exception in creating signature. UnsupportedEncodingException, [{0}]
|
||||
|
||||
errorrecordchecker.1=The percentage of dirty data should be limited to within [0.0, 1.0]
|
||||
errorrecordchecker.2=The number of dirty data entries should now be a nonnegative integer
|
||||
errorrecordchecker.3=Check for the number of dirty data entries has not passed. The limit is [{0}] entries, but [{1}] entries have been captured.
|
||||
errorrecordchecker.4=Check for the percentage of dirty data has not passed. The limit is [{0}], but [{1}] of dirty data has been captured.
|
||||
|
||||
|
||||
errorcode.install_error=Error in installing DataX engine. Please contact your O&M team to solve the problem.
|
||||
errorcode.argument_error=Error in running DataX engine. This problem is generally caused by an internal programming error. Please contact the DataX developer team to solve the problem.
|
||||
errorcode.runtime_error=The DataX engine encountered an error during running. For the specific cause, refer to the error diagnosis after DataX stops running.
|
||||
errorcode.config_error=Error in DataX engine configuration. This problem is generally caused by a DataX installation error. Please contact your O&M team to solve the problem.
|
||||
errorcode.secret_error=Error in DataX engine encryption or decryption. This problem is generally caused by a DataX key configuration error. Please contact your O&M team to solve the problem.
|
||||
errorcode.hook_load_error=Error in loading the external hook. This problem is generally caused by the DataX installation.
|
||||
errorcode.hook_fail_error=Error in executing the external hook
|
||||
errorcode.plugin_install_error=Error in installing DataX plug-in. This problem is generally caused by a DataX installation error. Please contact your O&M team to solve the problem.
|
||||
errorcode.plugin_not_found=Error in DataX plug-in configuration. This problem is generally caused by a DataX installation error. Please contact your O&M team to solve the problem.
|
||||
errorcode.plugin_init_error=Error in DataX plug-in initialization. This problem is generally caused by a DataX installation error. Please contact your O&M team to solve the problem.
|
||||
errorcode.plugin_runtime_error=The DataX plug-in encountered an error during running. For the specific cause, refer to the error diagnosis after DataX stops running.
|
||||
errorcode.plugin_dirty_data_limit_exceed=The dirty data transmitted by DataX exceeds user expectations. This error often occurs when a lot dirty data exists in the source data. Please carefully check the dirty data log information reported by DataX, or you can tune up the dirty data threshold value.
|
||||
errorcode.plugin_split_error=Error in DataX plug-in slicing. This problem is generally caused by a programming error in some DataX plug-in. Please contact the DataX developer team to solve the problem.
|
||||
errorcode.kill_job_timeout_error=The kill task times out. Please contact the PE to solve the problem
|
||||
errorcode.start_taskgroup_error=Failed to start the task group. Please contact the DataX developer team to solve the problem
|
||||
errorcode.call_datax_service_failed=Error in requesting DataX Service.
|
||||
errorcode.call_remote_failed=Remote call failure
|
||||
errorcode.killed_exit_value=The job has received a Kill command.
|
||||
|
||||
|
||||
httpclientutil.1=Request address: {0}. Request method: {1}. STATUS CODE = {2}, Response Entity: {3}
|
||||
httpclientutil.2=The remote interface returns -1. We will try again
|
||||
|
||||
|
||||
secretutil.1=System programing error. Unsupported encryption type
|
||||
secretutil.2=System programing error. Unsupported encryption type
|
||||
secretutil.3=RSA encryption error
|
||||
secretutil.4=RSA decryption error
|
||||
secretutil.5=Triple DES encryption error
|
||||
secretutil.6=RSA decryption error
|
||||
secretutil.7=Error in building Triple DES key
|
||||
secretutil.8=DataX configuration requires encryption and decryption, but unable to find the key configuration file
|
||||
secretutil.9=Error in reading the encryption and decryption configuration file
|
||||
secretutil.10=The version of the DataX-configured key is [{0}], but there is no configuration in the system. Error in task key configuration. The key version you configured does not exist
|
||||
secretutil.11=The version of the DataX-configured key is [{0}], but there is no configuration in the system. There may be an error in task key configuration, or a problem in system maintenance
|
||||
secretutil.12=The version of the DataX-configured key is [{0}], but there is no configuration in the system. Error in task key configuration. The key version you configured does not exist
|
||||
secretutil.13=The version of the DataX-configured key is [{0}], but there is no configuration in the system. There may be an error in task key configuration, or a problem in system maintenance
|
||||
secretutil.14=DataX configuration requires encryption and decryption, but some key in the configured key version [{0}] is empty
|
||||
secretutil.15=DataX configuration requires encryption and decryption, but some configured public/private key pairs are empty and the version is [{0}]
|
||||
secretutil.16=DataX configuration requires encryption and decryption, but the encryption and decryption configuration cannot be found
|
||||
|
@ -0,0 +1,58 @@
|
||||
configparser.1=\u63D2\u4EF6[{0},{1}]\u52A0\u8F7D\u5931\u8D25\uFF0C1s\u540E\u91CD\u8BD5... Exception:{2}
|
||||
configparser.2=\u83B7\u53D6\u4F5C\u4E1A\u914D\u7F6E\u4FE1\u606F\u5931\u8D25:{0}
|
||||
configparser.3=\u83B7\u53D6\u4F5C\u4E1A\u914D\u7F6E\u4FE1\u606F\u5931\u8D25:{0}
|
||||
configparser.4=\u83B7\u53D6\u4F5C\u4E1A\u914D\u7F6E\u4FE1\u606F\u5931\u8D25:{0}
|
||||
configparser.5=\u63D2\u4EF6\u52A0\u8F7D\u5931\u8D25\uFF0C\u672A\u5B8C\u6210\u6307\u5B9A\u63D2\u4EF6\u52A0\u8F7D:{0}
|
||||
configparser.6=\u63D2\u4EF6\u52A0\u8F7D\u5931\u8D25,\u5B58\u5728\u91CD\u590D\u63D2\u4EF6:{0}
|
||||
|
||||
dataxserviceutil.1=\u521B\u5EFA\u7B7E\u540D\u5F02\u5E38NoSuchAlgorithmException, [{0}]
|
||||
dataxserviceutil.2=\u521B\u5EFA\u7B7E\u540D\u5F02\u5E38InvalidKeyException, [{0}]
|
||||
dataxserviceutil.3=\u521B\u5EFA\u7B7E\u540D\u5F02\u5E38UnsupportedEncodingException, [{0}]
|
||||
|
||||
errorrecordchecker.1=\u810F\u6570\u636E\u767E\u5206\u6BD4\u9650\u5236\u5E94\u8BE5\u5728[0.0, 1.0]\u4E4B\u95F4
|
||||
errorrecordchecker.2=\u810F\u6570\u636E\u6761\u6570\u73B0\u5728\u5E94\u8BE5\u4E3A\u975E\u8D1F\u6574\u6570
|
||||
errorrecordchecker.3=\u810F\u6570\u636E\u6761\u6570\u68C0\u67E5\u4E0D\u901A\u8FC7\uFF0C\u9650\u5236\u662F[{0}]\u6761\uFF0C\u4F46\u5B9E\u9645\u4E0A\u6355\u83B7\u4E86[{1}]\u6761.
|
||||
errorrecordchecker.4=\u810F\u6570\u636E\u767E\u5206\u6BD4\u68C0\u67E5\u4E0D\u901A\u8FC7\uFF0C\u9650\u5236\u662F[{0}]\uFF0C\u4F46\u5B9E\u9645\u4E0A\u6355\u83B7\u5230[{1}].
|
||||
|
||||
|
||||
errorcode.install_error=DataX\u5F15\u64CE\u5B89\u88C5\u9519\u8BEF, \u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.argument_error=DataX\u5F15\u64CE\u8FD0\u884C\u9519\u8BEF\uFF0C\u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8E\u5185\u90E8\u7F16\u7A0B\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFBDataX\u5F00\u53D1\u56E2\u961F\u89E3\u51B3 .
|
||||
errorcode.runtime_error=DataX\u5F15\u64CE\u8FD0\u884C\u8FC7\u7A0B\u51FA\u9519\uFF0C\u5177\u4F53\u539F\u56E0\u8BF7\u53C2\u770BDataX\u8FD0\u884C\u7ED3\u675F\u65F6\u7684\u9519\u8BEF\u8BCA\u65AD\u4FE1\u606F .
|
||||
errorcode.config_error=DataX\u5F15\u64CE\u914D\u7F6E\u9519\u8BEF\uFF0C\u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.secret_error=DataX\u5F15\u64CE\u52A0\u89E3\u5BC6\u51FA\u9519\uFF0C\u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.hook_load_error=\u52A0\u8F7D\u5916\u90E8Hook\u51FA\u73B0\u9519\u8BEF\uFF0C\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u5F15\u8D77\u7684
|
||||
errorcode.hook_fail_error=\u6267\u884C\u5916\u90E8Hook\u51FA\u73B0\u9519\u8BEF
|
||||
errorcode.plugin_install_error=DataX\u63D2\u4EF6\u5B89\u88C5\u9519\u8BEF, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.plugin_not_found=DataX\u63D2\u4EF6\u914D\u7F6E\u9519\u8BEF, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.plugin_init_error=DataX\u63D2\u4EF6\u521D\u59CB\u5316\u9519\u8BEF, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.plugin_runtime_error=DataX\u63D2\u4EF6\u8FD0\u884C\u65F6\u51FA\u9519, \u5177\u4F53\u539F\u56E0\u8BF7\u53C2\u770BDataX\u8FD0\u884C\u7ED3\u675F\u65F6\u7684\u9519\u8BEF\u8BCA\u65AD\u4FE1\u606F .
|
||||
errorcode.plugin_dirty_data_limit_exceed=DataX\u4F20\u8F93\u810F\u6570\u636E\u8D85\u8FC7\u7528\u6237\u9884\u671F\uFF0C\u8BE5\u9519\u8BEF\u901A\u5E38\u662F\u7531\u4E8E\u6E90\u7AEF\u6570\u636E\u5B58\u5728\u8F83\u591A\u4E1A\u52A1\u810F\u6570\u636E\u5BFC\u81F4\uFF0C\u8BF7\u4ED4\u7EC6\u68C0\u67E5DataX\u6C47\u62A5\u7684\u810F\u6570\u636E\u65E5\u5FD7\u4FE1\u606F, \u6216\u8005\u60A8\u53EF\u4EE5\u9002\u5F53\u8C03\u5927\u810F\u6570\u636E\u9608\u503C .
|
||||
errorcode.plugin_split_error=DataX\u63D2\u4EF6\u5207\u5206\u51FA\u9519, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5404\u4E2A\u63D2\u4EF6\u7F16\u7A0B\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFBDataX\u5F00\u53D1\u56E2\u961F\u89E3\u51B3
|
||||
errorcode.kill_job_timeout_error=kill \u4EFB\u52A1\u8D85\u65F6\uFF0C\u8BF7\u8054\u7CFBPE\u89E3\u51B3
|
||||
errorcode.start_taskgroup_error=taskGroup\u542F\u52A8\u5931\u8D25,\u8BF7\u8054\u7CFBDataX\u5F00\u53D1\u56E2\u961F\u89E3\u51B3
|
||||
errorcode.call_datax_service_failed=\u8BF7\u6C42 DataX Service \u51FA\u9519.
|
||||
errorcode.call_remote_failed=\u8FDC\u7A0B\u8C03\u7528\u5931\u8D25
|
||||
errorcode.killed_exit_value=Job \u6536\u5230\u4E86 Kill \u547D\u4EE4.
|
||||
|
||||
|
||||
httpclientutil.1=\u8BF7\u6C42\u5730\u5740\uFF1A{0}, \u8BF7\u6C42\u65B9\u6CD5\uFF1A{1},STATUS CODE = {2}, Response Entity: {3}
|
||||
httpclientutil.2=\u8FDC\u7A0B\u63A5\u53E3\u8FD4\u56DE-1,\u5C06\u91CD\u8BD5
|
||||
|
||||
|
||||
secretutil.1=\u7CFB\u7EDF\u7F16\u7A0B\u9519\u8BEF,\u4E0D\u652F\u6301\u7684\u52A0\u5BC6\u7C7B\u578B
|
||||
secretutil.2=\u7CFB\u7EDF\u7F16\u7A0B\u9519\u8BEF,\u4E0D\u652F\u6301\u7684\u52A0\u5BC6\u7C7B\u578B
|
||||
secretutil.3=rsa\u52A0\u5BC6\u51FA\u9519
|
||||
secretutil.4=rsa\u89E3\u5BC6\u51FA\u9519
|
||||
secretutil.5=3\u91CDDES\u52A0\u5BC6\u51FA\u9519
|
||||
secretutil.6=rsa\u89E3\u5BC6\u51FA\u9519
|
||||
secretutil.7=\u6784\u5EFA\u4E09\u91CDDES\u5BC6\u5319\u51FA\u9519
|
||||
secretutil.8=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u65E0\u6CD5\u627E\u5230\u5BC6\u94A5\u7684\u914D\u7F6E\u6587\u4EF6
|
||||
secretutil.9=\u8BFB\u53D6\u52A0\u89E3\u5BC6\u914D\u7F6E\u6587\u4EF6\u51FA\u9519
|
||||
secretutil.10=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E0D\u5B58\u5728\u60A8\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C
|
||||
secretutil.11=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u53EF\u80FD\u662F\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E5F\u53EF\u80FD\u662F\u7CFB\u7EDF\u7EF4\u62A4\u95EE\u9898
|
||||
secretutil.12=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E0D\u5B58\u5728\u60A8\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C
|
||||
secretutil.13=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u53EF\u80FD\u662F\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E5F\u53EF\u80FD\u662F\u7CFB\u7EDF\u7EF4\u62A4\u95EE\u9898
|
||||
secretutil.14=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C[{0}]\u5B58\u5728\u5BC6\u94A5\u4E3A\u7A7A\u7684\u60C5\u51B5
|
||||
secretutil.15=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u914D\u7F6E\u7684\u516C\u79C1\u94A5\u5BF9\u5B58\u5728\u4E3A\u7A7A\u7684\u60C5\u51B5\uFF0C\u7248\u672C[{0}]
|
||||
secretutil.16=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u65E0\u6CD5\u627E\u5230\u52A0\u89E3\u5BC6\u914D\u7F6E
|
||||
|
@ -0,0 +1,58 @@
|
||||
configparser.1=\u63D2\u4EF6[{0},{1}]\u52A0\u8F7D\u5931\u8D25\uFF0C1s\u540E\u91CD\u8BD5... Exception:{2}
|
||||
configparser.2=\u83B7\u53D6\u4F5C\u4E1A\u914D\u7F6E\u4FE1\u606F\u5931\u8D25:{0}
|
||||
configparser.3=\u83B7\u53D6\u4F5C\u4E1A\u914D\u7F6E\u4FE1\u606F\u5931\u8D25:{0}
|
||||
configparser.4=\u83B7\u53D6\u4F5C\u4E1A\u914D\u7F6E\u4FE1\u606F\u5931\u8D25:{0}
|
||||
configparser.5=\u63D2\u4EF6\u52A0\u8F7D\u5931\u8D25\uFF0C\u672A\u5B8C\u6210\u6307\u5B9A\u63D2\u4EF6\u52A0\u8F7D:{0}
|
||||
configparser.6=\u63D2\u4EF6\u52A0\u8F7D\u5931\u8D25,\u5B58\u5728\u91CD\u590D\u63D2\u4EF6:{0}
|
||||
|
||||
dataxserviceutil.1=\u521B\u5EFA\u7B7E\u540D\u5F02\u5E38NoSuchAlgorithmException, [{0}]
|
||||
dataxserviceutil.2=\u521B\u5EFA\u7B7E\u540D\u5F02\u5E38InvalidKeyException, [{0}]
|
||||
dataxserviceutil.3=\u521B\u5EFA\u7B7E\u540D\u5F02\u5E38UnsupportedEncodingException, [{0}]
|
||||
|
||||
errorrecordchecker.1=\u810F\u6570\u636E\u767E\u5206\u6BD4\u9650\u5236\u5E94\u8BE5\u5728[0.0, 1.0]\u4E4B\u95F4
|
||||
errorrecordchecker.2=\u810F\u6570\u636E\u6761\u6570\u73B0\u5728\u5E94\u8BE5\u4E3A\u975E\u8D1F\u6574\u6570
|
||||
errorrecordchecker.3=\u810F\u6570\u636E\u6761\u6570\u68C0\u67E5\u4E0D\u901A\u8FC7\uFF0C\u9650\u5236\u662F[{0}]\u6761\uFF0C\u4F46\u5B9E\u9645\u4E0A\u6355\u83B7\u4E86[{1}]\u6761.
|
||||
errorrecordchecker.4=\u810F\u6570\u636E\u767E\u5206\u6BD4\u68C0\u67E5\u4E0D\u901A\u8FC7\uFF0C\u9650\u5236\u662F[{0}]\uFF0C\u4F46\u5B9E\u9645\u4E0A\u6355\u83B7\u5230[{1}].
|
||||
|
||||
|
||||
errorcode.install_error=DataX\u5F15\u64CE\u5B89\u88C5\u9519\u8BEF, \u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.argument_error=DataX\u5F15\u64CE\u8FD0\u884C\u9519\u8BEF\uFF0C\u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8E\u5185\u90E8\u7F16\u7A0B\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFBDataX\u5F00\u53D1\u56E2\u961F\u89E3\u51B3 .
|
||||
errorcode.runtime_error=DataX\u5F15\u64CE\u8FD0\u884C\u8FC7\u7A0B\u51FA\u9519\uFF0C\u5177\u4F53\u539F\u56E0\u8BF7\u53C2\u770BDataX\u8FD0\u884C\u7ED3\u675F\u65F6\u7684\u9519\u8BEF\u8BCA\u65AD\u4FE1\u606F .
|
||||
errorcode.config_error=DataX\u5F15\u64CE\u914D\u7F6E\u9519\u8BEF\uFF0C\u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.secret_error=DataX\u5F15\u64CE\u52A0\u89E3\u5BC6\u51FA\u9519\uFF0C\u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.hook_load_error=\u52A0\u8F7D\u5916\u90E8Hook\u51FA\u73B0\u9519\u8BEF\uFF0C\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u5F15\u8D77\u7684
|
||||
errorcode.hook_fail_error=\u6267\u884C\u5916\u90E8Hook\u51FA\u73B0\u9519\u8BEF
|
||||
errorcode.plugin_install_error=DataX\u63D2\u4EF6\u5B89\u88C5\u9519\u8BEF, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.plugin_not_found=DataX\u63D2\u4EF6\u914D\u7F6E\u9519\u8BEF, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.plugin_init_error=DataX\u63D2\u4EF6\u521D\u59CB\u5316\u9519\u8BEF, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.plugin_runtime_error=DataX\u63D2\u4EF6\u8FD0\u884C\u65F6\u51FA\u9519, \u5177\u4F53\u539F\u56E0\u8BF7\u53C2\u770BDataX\u8FD0\u884C\u7ED3\u675F\u65F6\u7684\u9519\u8BEF\u8BCA\u65AD\u4FE1\u606F .
|
||||
errorcode.plugin_dirty_data_limit_exceed=DataX\u4F20\u8F93\u810F\u6570\u636E\u8D85\u8FC7\u7528\u6237\u9884\u671F\uFF0C\u8BE5\u9519\u8BEF\u901A\u5E38\u662F\u7531\u4E8E\u6E90\u7AEF\u6570\u636E\u5B58\u5728\u8F83\u591A\u4E1A\u52A1\u810F\u6570\u636E\u5BFC\u81F4\uFF0C\u8BF7\u4ED4\u7EC6\u68C0\u67E5DataX\u6C47\u62A5\u7684\u810F\u6570\u636E\u65E5\u5FD7\u4FE1\u606F, \u6216\u8005\u60A8\u53EF\u4EE5\u9002\u5F53\u8C03\u5927\u810F\u6570\u636E\u9608\u503C .
|
||||
errorcode.plugin_split_error=DataX\u63D2\u4EF6\u5207\u5206\u51FA\u9519, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5404\u4E2A\u63D2\u4EF6\u7F16\u7A0B\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFBDataX\u5F00\u53D1\u56E2\u961F\u89E3\u51B3
|
||||
errorcode.kill_job_timeout_error=kill \u4EFB\u52A1\u8D85\u65F6\uFF0C\u8BF7\u8054\u7CFBPE\u89E3\u51B3
|
||||
errorcode.start_taskgroup_error=taskGroup\u542F\u52A8\u5931\u8D25,\u8BF7\u8054\u7CFBDataX\u5F00\u53D1\u56E2\u961F\u89E3\u51B3
|
||||
errorcode.call_datax_service_failed=\u8BF7\u6C42 DataX Service \u51FA\u9519.
|
||||
errorcode.call_remote_failed=\u8FDC\u7A0B\u8C03\u7528\u5931\u8D25
|
||||
errorcode.killed_exit_value=Job \u6536\u5230\u4E86 Kill \u547D\u4EE4.
|
||||
|
||||
|
||||
httpclientutil.1=\u8BF7\u6C42\u5730\u5740\uFF1A{0}, \u8BF7\u6C42\u65B9\u6CD5\uFF1A{1},STATUS CODE = {2}, Response Entity: {3}
|
||||
httpclientutil.2=\u8FDC\u7A0B\u63A5\u53E3\u8FD4\u56DE-1,\u5C06\u91CD\u8BD5
|
||||
|
||||
|
||||
secretutil.1=\u7CFB\u7EDF\u7F16\u7A0B\u9519\u8BEF,\u4E0D\u652F\u6301\u7684\u52A0\u5BC6\u7C7B\u578B
|
||||
secretutil.2=\u7CFB\u7EDF\u7F16\u7A0B\u9519\u8BEF,\u4E0D\u652F\u6301\u7684\u52A0\u5BC6\u7C7B\u578B
|
||||
secretutil.3=rsa\u52A0\u5BC6\u51FA\u9519
|
||||
secretutil.4=rsa\u89E3\u5BC6\u51FA\u9519
|
||||
secretutil.5=3\u91CDDES\u52A0\u5BC6\u51FA\u9519
|
||||
secretutil.6=rsa\u89E3\u5BC6\u51FA\u9519
|
||||
secretutil.7=\u6784\u5EFA\u4E09\u91CDDES\u5BC6\u5319\u51FA\u9519
|
||||
secretutil.8=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u65E0\u6CD5\u627E\u5230\u5BC6\u94A5\u7684\u914D\u7F6E\u6587\u4EF6
|
||||
secretutil.9=\u8BFB\u53D6\u52A0\u89E3\u5BC6\u914D\u7F6E\u6587\u4EF6\u51FA\u9519
|
||||
secretutil.10=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E0D\u5B58\u5728\u60A8\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C
|
||||
secretutil.11=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u53EF\u80FD\u662F\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E5F\u53EF\u80FD\u662F\u7CFB\u7EDF\u7EF4\u62A4\u95EE\u9898
|
||||
secretutil.12=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E0D\u5B58\u5728\u60A8\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C
|
||||
secretutil.13=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u53EF\u80FD\u662F\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E5F\u53EF\u80FD\u662F\u7CFB\u7EDF\u7EF4\u62A4\u95EE\u9898
|
||||
secretutil.14=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C[{0}]\u5B58\u5728\u5BC6\u94A5\u4E3A\u7A7A\u7684\u60C5\u51B5
|
||||
secretutil.15=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u914D\u7F6E\u7684\u516C\u79C1\u94A5\u5BF9\u5B58\u5728\u4E3A\u7A7A\u7684\u60C5\u51B5\uFF0C\u7248\u672C[{0}]
|
||||
secretutil.16=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u65E0\u6CD5\u627E\u5230\u52A0\u89E3\u5BC6\u914D\u7F6E
|
||||
|
@ -0,0 +1,116 @@
|
||||
configparser.1=\u63D2\u4EF6[{0},{1}]\u52A0\u8F7D\u5931\u8D25\uFF0C1s\u540E\u91CD\u8BD5... Exception:{2}
|
||||
configparser.2=\u83B7\u53D6\u4F5C\u4E1A\u914D\u7F6E\u4FE1\u606F\u5931\u8D25:{0}
|
||||
configparser.3=\u83B7\u53D6\u4F5C\u4E1A\u914D\u7F6E\u4FE1\u606F\u5931\u8D25:{0}
|
||||
configparser.4=\u83B7\u53D6\u4F5C\u4E1A\u914D\u7F6E\u4FE1\u606F\u5931\u8D25:{0}
|
||||
configparser.5=\u63D2\u4EF6\u52A0\u8F7D\u5931\u8D25\uFF0C\u672A\u5B8C\u6210\u6307\u5B9A\u63D2\u4EF6\u52A0\u8F7D:{0}
|
||||
configparser.6=\u63D2\u4EF6\u52A0\u8F7D\u5931\u8D25,\u5B58\u5728\u91CD\u590D\u63D2\u4EF6:{0}
|
||||
|
||||
dataxserviceutil.1=\u521B\u5EFA\u7B7E\u540D\u5F02\u5E38NoSuchAlgorithmException, [{0}]
|
||||
dataxserviceutil.2=\u521B\u5EFA\u7B7E\u540D\u5F02\u5E38InvalidKeyException, [{0}]
|
||||
dataxserviceutil.3=\u521B\u5EFA\u7B7E\u540D\u5F02\u5E38UnsupportedEncodingException, [{0}]
|
||||
|
||||
errorrecordchecker.1=\u810F\u6570\u636E\u767E\u5206\u6BD4\u9650\u5236\u5E94\u8BE5\u5728[0.0, 1.0]\u4E4B\u95F4
|
||||
errorrecordchecker.2=\u810F\u6570\u636E\u6761\u6570\u73B0\u5728\u5E94\u8BE5\u4E3A\u975E\u8D1F\u6574\u6570
|
||||
errorrecordchecker.3=\u810F\u6570\u636E\u6761\u6570\u68C0\u67E5\u4E0D\u901A\u8FC7\uFF0C\u9650\u5236\u662F[{0}]\u6761\uFF0C\u4F46\u5B9E\u9645\u4E0A\u6355\u83B7\u4E86[{1}]\u6761.
|
||||
errorrecordchecker.4=\u810F\u6570\u636E\u767E\u5206\u6BD4\u68C0\u67E5\u4E0D\u901A\u8FC7\uFF0C\u9650\u5236\u662F[{0}]\uFF0C\u4F46\u5B9E\u9645\u4E0A\u6355\u83B7\u5230[{1}].
|
||||
|
||||
|
||||
errorcode.install_error=DataX\u5F15\u64CE\u5B89\u88C5\u9519\u8BEF, \u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.argument_error=DataX\u5F15\u64CE\u8FD0\u884C\u9519\u8BEF\uFF0C\u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8E\u5185\u90E8\u7F16\u7A0B\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFBDataX\u5F00\u53D1\u56E2\u961F\u89E3\u51B3 .
|
||||
errorcode.runtime_error=DataX\u5F15\u64CE\u8FD0\u884C\u8FC7\u7A0B\u51FA\u9519\uFF0C\u5177\u4F53\u539F\u56E0\u8BF7\u53C2\u770BDataX\u8FD0\u884C\u7ED3\u675F\u65F6\u7684\u9519\u8BEF\u8BCA\u65AD\u4FE1\u606F .
|
||||
errorcode.config_error=DataX\u5F15\u64CE\u914D\u7F6E\u9519\u8BEF\uFF0C\u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.secret_error=DataX\u5F15\u64CE\u52A0\u89E3\u5BC6\u51FA\u9519\uFF0C\u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.hook_load_error=\u52A0\u8F7D\u5916\u90E8Hook\u51FA\u73B0\u9519\u8BEF\uFF0C\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u5F15\u8D77\u7684
|
||||
errorcode.hook_fail_error=\u6267\u884C\u5916\u90E8Hook\u51FA\u73B0\u9519\u8BEF
|
||||
errorcode.plugin_install_error=DataX\u63D2\u4EF6\u5B89\u88C5\u9519\u8BEF, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.plugin_not_found=DataX\u63D2\u4EF6\u914D\u7F6E\u9519\u8BEF, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.plugin_init_error=DataX\u63D2\u4EF6\u521D\u59CB\u5316\u9519\u8BEF, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.plugin_runtime_error=DataX\u63D2\u4EF6\u8FD0\u884C\u65F6\u51FA\u9519, \u5177\u4F53\u539F\u56E0\u8BF7\u53C2\u770BDataX\u8FD0\u884C\u7ED3\u675F\u65F6\u7684\u9519\u8BEF\u8BCA\u65AD\u4FE1\u606F .
|
||||
errorcode.plugin_dirty_data_limit_exceed=DataX\u4F20\u8F93\u810F\u6570\u636E\u8D85\u8FC7\u7528\u6237\u9884\u671F\uFF0C\u8BE5\u9519\u8BEF\u901A\u5E38\u662F\u7531\u4E8E\u6E90\u7AEF\u6570\u636E\u5B58\u5728\u8F83\u591A\u4E1A\u52A1\u810F\u6570\u636E\u5BFC\u81F4\uFF0C\u8BF7\u4ED4\u7EC6\u68C0\u67E5DataX\u6C47\u62A5\u7684\u810F\u6570\u636E\u65E5\u5FD7\u4FE1\u606F, \u6216\u8005\u60A8\u53EF\u4EE5\u9002\u5F53\u8C03\u5927\u810F\u6570\u636E\u9608\u503C .
|
||||
errorcode.plugin_split_error=DataX\u63D2\u4EF6\u5207\u5206\u51FA\u9519, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5404\u4E2A\u63D2\u4EF6\u7F16\u7A0B\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFBDataX\u5F00\u53D1\u56E2\u961F\u89E3\u51B3
|
||||
errorcode.kill_job_timeout_error=kill \u4EFB\u52A1\u8D85\u65F6\uFF0C\u8BF7\u8054\u7CFBPE\u89E3\u51B3
|
||||
errorcode.start_taskgroup_error=taskGroup\u542F\u52A8\u5931\u8D25,\u8BF7\u8054\u7CFBDataX\u5F00\u53D1\u56E2\u961F\u89E3\u51B3
|
||||
errorcode.call_datax_service_failed=\u8BF7\u6C42 DataX Service \u51FA\u9519.
|
||||
errorcode.call_remote_failed=\u8FDC\u7A0B\u8C03\u7528\u5931\u8D25
|
||||
errorcode.killed_exit_value=Job \u6536\u5230\u4E86 Kill \u547D\u4EE4.
|
||||
|
||||
|
||||
httpclientutil.1=\u8BF7\u6C42\u5730\u5740\uFF1A{0}, \u8BF7\u6C42\u65B9\u6CD5\uFF1A{1},STATUS CODE = {2}, Response Entity: {3}
|
||||
httpclientutil.2=\u8FDC\u7A0B\u63A5\u53E3\u8FD4\u56DE-1,\u5C06\u91CD\u8BD5
|
||||
|
||||
|
||||
secretutil.1=\u7CFB\u7EDF\u7F16\u7A0B\u9519\u8BEF,\u4E0D\u652F\u6301\u7684\u52A0\u5BC6\u7C7B\u578B
|
||||
secretutil.2=\u7CFB\u7EDF\u7F16\u7A0B\u9519\u8BEF,\u4E0D\u652F\u6301\u7684\u52A0\u5BC6\u7C7B\u578B
|
||||
secretutil.3=rsa\u52A0\u5BC6\u51FA\u9519
|
||||
secretutil.4=rsa\u89E3\u5BC6\u51FA\u9519
|
||||
secretutil.5=3\u91CDDES\u52A0\u5BC6\u51FA\u9519
|
||||
secretutil.6=rsa\u89E3\u5BC6\u51FA\u9519
|
||||
secretutil.7=\u6784\u5EFA\u4E09\u91CDDES\u5BC6\u5319\u51FA\u9519
|
||||
secretutil.8=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u65E0\u6CD5\u627E\u5230\u5BC6\u94A5\u7684\u914D\u7F6E\u6587\u4EF6
|
||||
secretutil.9=\u8BFB\u53D6\u52A0\u89E3\u5BC6\u914D\u7F6E\u6587\u4EF6\u51FA\u9519
|
||||
secretutil.10=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E0D\u5B58\u5728\u60A8\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C
|
||||
secretutil.11=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u53EF\u80FD\u662F\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E5F\u53EF\u80FD\u662F\u7CFB\u7EDF\u7EF4\u62A4\u95EE\u9898
|
||||
secretutil.12=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E0D\u5B58\u5728\u60A8\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C
|
||||
secretutil.13=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u53EF\u80FD\u662F\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E5F\u53EF\u80FD\u662F\u7CFB\u7EDF\u7EF4\u62A4\u95EE\u9898
|
||||
secretutil.14=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C[{0}]\u5B58\u5728\u5BC6\u94A5\u4E3A\u7A7A\u7684\u60C5\u51B5
|
||||
secretutil.15=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u914D\u7F6E\u7684\u516C\u79C1\u94A5\u5BF9\u5B58\u5728\u4E3A\u7A7A\u7684\u60C5\u51B5\uFF0C\u7248\u672C[{0}]
|
||||
secretutil.16=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u65E0\u6CD5\u627E\u5230\u52A0\u89E3\u5BC6\u914D\u7F6E
|
||||
|
||||
configparser.1=\u5916\u639B\u7A0B\u5F0F[{0},{1}]\u8F09\u5165\u5931\u6557\uFF0C1s\u5F8C\u91CD\u8A66... Exception:{2}
|
||||
configparser.2=\u7372\u53D6\u4F5C\u696D\u914D\u7F6E\u8CC7\u8A0A\u5931\u6557:{0}
|
||||
configparser.3=\u7372\u53D6\u4F5C\u696D\u914D\u7F6E\u8CC7\u8A0A\u5931\u6557:{0}
|
||||
configparser.4=\u7372\u53D6\u4F5C\u696D\u914D\u7F6E\u8CC7\u8A0A\u5931\u6557:{0}
|
||||
configparser.5=\u5916\u639B\u7A0B\u5F0F\u8F09\u5165\u5931\u6557\uFF0C\u672A\u5B8C\u6210\u6307\u5B9A\u5916\u639B\u7A0B\u5F0F\u8F09\u5165:{0}
|
||||
configparser.6=\u5916\u639B\u7A0B\u5F0F\u8F09\u5165\u5931\u6557,\u5B58\u5728\u91CD\u8907\u5916\u639B\u7A0B\u5F0F:{0}
|
||||
|
||||
dataxserviceutil.1=\u5EFA\u7ACB\u7C3D\u540D\u7570\u5E38NoSuchAlgorithmException, [{0}]
|
||||
dataxserviceutil.2=\u5EFA\u7ACB\u7C3D\u540D\u7570\u5E38InvalidKeyException, [{0}]
|
||||
dataxserviceutil.3=\u5EFA\u7ACB\u7C3D\u540D\u7570\u5E38UnsupportedEncodingException, [{0}]
|
||||
|
||||
errorrecordchecker.1=\u9AD2\u6578\u64DA\u767E\u5206\u6BD4\u9650\u5236\u61C9\u8A72\u5728[0.0, 1.0]\u4E4B\u9593
|
||||
errorrecordchecker.2=\u9AD2\u6578\u64DA\u689D\u6578\u73FE\u5728\u61C9\u8A72\u70BA\u975E\u8CA0\u6574\u6578
|
||||
errorrecordchecker.3=\u9AD2\u6578\u64DA\u689D\u6578\u6AA2\u67E5\u4E0D\u901A\u904E\uFF0C\u9650\u5236\u662F[{0}]\u689D\uFF0C\u4F46\u5BE6\u969B\u4E0A\u6355\u7372\u4E86[{1}]\u689D.
|
||||
errorrecordchecker.4=\u9AD2\u6578\u64DA\u767E\u5206\u6BD4\u6AA2\u67E5\u4E0D\u901A\u904E\uFF0C\u9650\u5236\u662F[{0}]\uFF0C\u4F46\u5BE6\u969B\u4E0A\u6355\u7372\u5230[{1}].
|
||||
|
||||
|
||||
errorcode.install_error=DataX\u5F15\u64CE\u5B89\u88DD\u932F\u8AA4, \u8ACB\u806F\u7D61\u60A8\u7684\u904B\u7DAD\u89E3\u6C7A .
|
||||
errorcode.argument_error=DataX\u5F15\u64CE\u904B\u884C\u932F\u8AA4\uFF0C\u8A72\u554F\u984C\u901A\u5E38\u662F\u7531\u65BC\u5167\u90E8\u7DE8\u7A0B\u932F\u8AA4\u5F15\u8D77\uFF0C\u8ACB\u806F\u7D61DataX\u958B\u767C\u5718\u968A\u89E3\u6C7A .
|
||||
errorcode.runtime_error=DataX\u5F15\u64CE\u904B\u884C\u904E\u7A0B\u51FA\u932F\uFF0C\u5177\u9AD4\u539F\u56E0\u8ACB\u53C3\u770BDataX\u904B\u884C\u7D50\u675F\u6642\u7684\u932F\u8AA4\u8A3A\u65B7\u8CC7\u8A0A .
|
||||
errorcode.config_error=DataX\u5F15\u64CE\u914D\u7F6E\u932F\u8AA4\uFF0C\u8A72\u554F\u984C\u901A\u5E38\u662F\u7531\u65BCDataX\u5B89\u88DD\u932F\u8AA4\u5F15\u8D77\uFF0C\u8ACB\u806F\u7D61\u60A8\u7684\u904B\u7DAD\u89E3\u6C7A .
|
||||
errorcode.secret_error=DataX\u5F15\u64CE\u52A0\u89E3\u5BC6\u51FA\u932F\uFF0C\u8A72\u554F\u984C\u901A\u5E38\u662F\u7531\u65BCDataX\u5BC6\u9470\u914D\u7F6E\u932F\u8AA4\u5F15\u8D77\uFF0C\u8ACB\u806F\u7D61\u60A8\u7684\u904B\u7DAD\u89E3\u6C7A .
|
||||
errorcode.hook_load_error=\u8F09\u5165\u5916\u90E8Hook\u51FA\u73FE\u932F\u8AA4\uFF0C\u901A\u5E38\u662F\u7531\u65BCDataX\u5B89\u88DD\u5F15\u8D77\u7684
|
||||
errorcode.hook_fail_error=\u57F7\u884C\u5916\u90E8Hook\u51FA\u73FE\u932F\u8AA4
|
||||
errorcode.plugin_install_error=DataX\u5916\u639B\u7A0B\u5F0F\u5B89\u88DD\u932F\u8AA4, \u8A72\u554F\u984C\u901A\u5E38\u662F\u7531\u65BCDataX\u5B89\u88DD\u932F\u8AA4\u5F15\u8D77\uFF0C\u8ACB\u806F\u7D61\u60A8\u7684\u904B\u7DAD\u89E3\u6C7A .
|
||||
errorcode.plugin_not_found=DataX\u5916\u639B\u7A0B\u5F0F\u914D\u7F6E\u932F\u8AA4, \u8A72\u554F\u984C\u901A\u5E38\u662F\u7531\u65BCDataX\u5B89\u88DD\u932F\u8AA4\u5F15\u8D77\uFF0C\u8ACB\u806F\u7D61\u60A8\u7684\u904B\u7DAD\u89E3\u6C7A .
|
||||
errorcode.plugin_init_error=DataX\u5916\u639B\u7A0B\u5F0F\u521D\u59CB\u5316\u932F\u8AA4, \u8A72\u554F\u984C\u901A\u5E38\u662F\u7531\u65BCDataX\u5B89\u88DD\u932F\u8AA4\u5F15\u8D77\uFF0C\u8ACB\u806F\u7D61\u60A8\u7684\u904B\u7DAD\u89E3\u6C7A .
|
||||
errorcode.plugin_runtime_error=DataX\u5916\u639B\u7A0B\u5F0F\u904B\u884C\u6642\u51FA\u932F, \u5177\u9AD4\u539F\u56E0\u8ACB\u53C3\u770BDataX\u904B\u884C\u7D50\u675F\u6642\u7684\u932F\u8AA4\u8A3A\u65B7\u8CC7\u8A0A .
|
||||
errorcode.plugin_dirty_data_limit_exceed=DataX\u50B3\u8F38\u9AD2\u6578\u64DA\u8D85\u904E\u7528\u6236\u9810\u671F\uFF0C\u8A72\u932F\u8AA4\u901A\u5E38\u662F\u7531\u65BC\u6E90\u7AEF\u6578\u64DA\u5B58\u5728\u8F03\u591A\u696D\u52D9\u9AD2\u6578\u64DA\u5C0E\u81F4\uFF0C\u8ACB\u4ED4\u7D30\u6AA2\u67E5DataX\u5F59\u5831\u7684\u9AD2\u6578\u64DA\u65E5\u8A8C\u8CC7\u8A0A, \u6216\u8005\u60A8\u53EF\u4EE5\u9069\u7576\u8ABF\u5927\u9AD2\u6578\u64DA\u95BE\u503C .
|
||||
errorcode.plugin_split_error=DataX\u5916\u639B\u7A0B\u5F0F\u5207\u5206\u51FA\u932F, \u8A72\u554F\u984C\u901A\u5E38\u662F\u7531\u65BCDataX\u5404\u500B\u5916\u639B\u7A0B\u5F0F\u7DE8\u7A0B\u932F\u8AA4\u5F15\u8D77\uFF0C\u8ACB\u806F\u7D61DataX\u958B\u767C\u5718\u968A\u89E3\u6C7A
|
||||
errorcode.kill_job_timeout_error=kill \u4EFB\u52D9\u903E\u6642\uFF0C\u8ACB\u806F\u7D61PE\u89E3\u6C7A
|
||||
errorcode.start_taskgroup_error=taskGroup\u555F\u52D5\u5931\u6557,\u8ACB\u806F\u7D61DataX\u958B\u767C\u5718\u968A\u89E3\u6C7A
|
||||
errorcode.call_datax_service_failed=\u8ACB\u6C42 DataX Service \u51FA\u932F.
|
||||
errorcode.call_remote_failed=\u9060\u7A0B\u8ABF\u7528\u5931\u6557
|
||||
errorcode.killed_exit_value=Job \u6536\u5230\u4E86 Kill \u547D\u4EE4.
|
||||
|
||||
|
||||
httpclientutil.1=\u8ACB\u6C42\u5730\u5740\uFF1A{0}, \u8ACB\u6C42\u65B9\u6CD5\uFF1A{1},STATUS CODE = {2}, Response Entity: {3}
|
||||
httpclientutil.2=\u9060\u7A0B\u63A5\u53E3\u8FD4\u56DE-1,\u5C07\u91CD\u8A66
|
||||
|
||||
|
||||
secretutil.1=\u7CFB\u7D71\u7DE8\u7A0B\u932F\u8AA4,\u4E0D\u652F\u63F4\u7684\u52A0\u5BC6\u985E\u578B
|
||||
secretutil.2=\u7CFB\u7D71\u7DE8\u7A0B\u932F\u8AA4,\u4E0D\u652F\u63F4\u7684\u52A0\u5BC6\u985E\u578B
|
||||
secretutil.3=rsa\u52A0\u5BC6\u51FA\u932F
|
||||
secretutil.4=rsa\u89E3\u5BC6\u51FA\u932F
|
||||
secretutil.5=3\u91CDDES\u52A0\u5BC6\u51FA\u932F
|
||||
secretutil.6=rsa\u89E3\u5BC6\u51FA\u932F
|
||||
secretutil.7=\u69CB\u5EFA\u4E09\u91CDDES\u5BC6\u5319\u51FA\u932F
|
||||
secretutil.8=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u7121\u6CD5\u627E\u5230\u5BC6\u9470\u7684\u914D\u7F6E\u6A94\u6848
|
||||
secretutil.9=\u8B80\u53D6\u52A0\u89E3\u5BC6\u914D\u7F6E\u6A94\u6848\u51FA\u932F
|
||||
secretutil.10=DataX\u914D\u7F6E\u7684\u5BC6\u9470\u7248\u672C\u70BA[{0}]\uFF0C\u4F46\u5728\u7CFB\u7D71\u4E2D\u6C92\u6709\u914D\u7F6E\uFF0C\u4EFB\u52D9\u5BC6\u9470\u914D\u7F6E\u932F\u8AA4\uFF0C\u4E0D\u5B58\u5728\u60A8\u914D\u7F6E\u7684\u5BC6\u9470\u7248\u672C
|
||||
secretutil.11=DataX\u914D\u7F6E\u7684\u5BC6\u9470\u7248\u672C\u70BA[{0}]\uFF0C\u4F46\u5728\u7CFB\u7D71\u4E2D\u6C92\u6709\u914D\u7F6E\uFF0C\u53EF\u80FD\u662F\u4EFB\u52D9\u5BC6\u9470\u914D\u7F6E\u932F\u8AA4\uFF0C\u4E5F\u53EF\u80FD\u662F\u7CFB\u7D71\u7DAD\u8B77\u554F\u984C
|
||||
secretutil.12=DataX\u914D\u7F6E\u7684\u5BC6\u9470\u7248\u672C\u70BA[{0}]\uFF0C\u4F46\u5728\u7CFB\u7D71\u4E2D\u6C92\u6709\u914D\u7F6E\uFF0C\u4EFB\u52D9\u5BC6\u9470\u914D\u7F6E\u932F\u8AA4\uFF0C\u4E0D\u5B58\u5728\u60A8\u914D\u7F6E\u7684\u5BC6\u9470\u7248\u672C
|
||||
secretutil.13=DataX\u914D\u7F6E\u7684\u5BC6\u9470\u7248\u672C\u70BA[{0}]\uFF0C\u4F46\u5728\u7CFB\u7D71\u4E2D\u6C92\u6709\u914D\u7F6E\uFF0C\u53EF\u80FD\u662F\u4EFB\u52D9\u5BC6\u9470\u914D\u7F6E\u932F\u8AA4\uFF0C\u4E5F\u53EF\u80FD\u662F\u7CFB\u7D71\u7DAD\u8B77\u554F\u984C
|
||||
secretutil.14=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u914D\u7F6E\u7684\u5BC6\u9470\u7248\u672C[{0}]\u5B58\u5728\u5BC6\u9470\u70BA\u7A7A\u7684\u60C5\u6CC1
|
||||
secretutil.15=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u914D\u7F6E\u7684\u516C\u79C1\u9470\u5C0D\u5B58\u5728\u70BA\u7A7A\u7684\u60C5\u6CC1\uFF0C\u7248\u672C[{0}]
|
||||
secretutil.16=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u7121\u6CD5\u627E\u5230\u52A0\u89E3\u5BC6\u914D\u7F6E
|
||||
|
@ -0,0 +1,116 @@
|
||||
configparser.1=\u63D2\u4EF6[{0},{1}]\u52A0\u8F7D\u5931\u8D25\uFF0C1s\u540E\u91CD\u8BD5... Exception:{2}
|
||||
configparser.2=\u83B7\u53D6\u4F5C\u4E1A\u914D\u7F6E\u4FE1\u606F\u5931\u8D25:{0}
|
||||
configparser.3=\u83B7\u53D6\u4F5C\u4E1A\u914D\u7F6E\u4FE1\u606F\u5931\u8D25:{0}
|
||||
configparser.4=\u83B7\u53D6\u4F5C\u4E1A\u914D\u7F6E\u4FE1\u606F\u5931\u8D25:{0}
|
||||
configparser.5=\u63D2\u4EF6\u52A0\u8F7D\u5931\u8D25\uFF0C\u672A\u5B8C\u6210\u6307\u5B9A\u63D2\u4EF6\u52A0\u8F7D:{0}
|
||||
configparser.6=\u63D2\u4EF6\u52A0\u8F7D\u5931\u8D25,\u5B58\u5728\u91CD\u590D\u63D2\u4EF6:{0}
|
||||
|
||||
dataxserviceutil.1=\u521B\u5EFA\u7B7E\u540D\u5F02\u5E38NoSuchAlgorithmException, [{0}]
|
||||
dataxserviceutil.2=\u521B\u5EFA\u7B7E\u540D\u5F02\u5E38InvalidKeyException, [{0}]
|
||||
dataxserviceutil.3=\u521B\u5EFA\u7B7E\u540D\u5F02\u5E38UnsupportedEncodingException, [{0}]
|
||||
|
||||
errorrecordchecker.1=\u810F\u6570\u636E\u767E\u5206\u6BD4\u9650\u5236\u5E94\u8BE5\u5728[0.0, 1.0]\u4E4B\u95F4
|
||||
errorrecordchecker.2=\u810F\u6570\u636E\u6761\u6570\u73B0\u5728\u5E94\u8BE5\u4E3A\u975E\u8D1F\u6574\u6570
|
||||
errorrecordchecker.3=\u810F\u6570\u636E\u6761\u6570\u68C0\u67E5\u4E0D\u901A\u8FC7\uFF0C\u9650\u5236\u662F[{0}]\u6761\uFF0C\u4F46\u5B9E\u9645\u4E0A\u6355\u83B7\u4E86[{1}]\u6761.
|
||||
errorrecordchecker.4=\u810F\u6570\u636E\u767E\u5206\u6BD4\u68C0\u67E5\u4E0D\u901A\u8FC7\uFF0C\u9650\u5236\u662F[{0}]\uFF0C\u4F46\u5B9E\u9645\u4E0A\u6355\u83B7\u5230[{1}].
|
||||
|
||||
|
||||
errorcode.install_error=DataX\u5F15\u64CE\u5B89\u88C5\u9519\u8BEF, \u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.argument_error=DataX\u5F15\u64CE\u8FD0\u884C\u9519\u8BEF\uFF0C\u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8E\u5185\u90E8\u7F16\u7A0B\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFBDataX\u5F00\u53D1\u56E2\u961F\u89E3\u51B3 .
|
||||
errorcode.runtime_error=DataX\u5F15\u64CE\u8FD0\u884C\u8FC7\u7A0B\u51FA\u9519\uFF0C\u5177\u4F53\u539F\u56E0\u8BF7\u53C2\u770BDataX\u8FD0\u884C\u7ED3\u675F\u65F6\u7684\u9519\u8BEF\u8BCA\u65AD\u4FE1\u606F .
|
||||
errorcode.config_error=DataX\u5F15\u64CE\u914D\u7F6E\u9519\u8BEF\uFF0C\u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.secret_error=DataX\u5F15\u64CE\u52A0\u89E3\u5BC6\u51FA\u9519\uFF0C\u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.hook_load_error=\u52A0\u8F7D\u5916\u90E8Hook\u51FA\u73B0\u9519\u8BEF\uFF0C\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u5F15\u8D77\u7684
|
||||
errorcode.hook_fail_error=\u6267\u884C\u5916\u90E8Hook\u51FA\u73B0\u9519\u8BEF
|
||||
errorcode.plugin_install_error=DataX\u63D2\u4EF6\u5B89\u88C5\u9519\u8BEF, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.plugin_not_found=DataX\u63D2\u4EF6\u914D\u7F6E\u9519\u8BEF, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.plugin_init_error=DataX\u63D2\u4EF6\u521D\u59CB\u5316\u9519\u8BEF, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5B89\u88C5\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFB\u60A8\u7684\u8FD0\u7EF4\u89E3\u51B3 .
|
||||
errorcode.plugin_runtime_error=DataX\u63D2\u4EF6\u8FD0\u884C\u65F6\u51FA\u9519, \u5177\u4F53\u539F\u56E0\u8BF7\u53C2\u770BDataX\u8FD0\u884C\u7ED3\u675F\u65F6\u7684\u9519\u8BEF\u8BCA\u65AD\u4FE1\u606F .
|
||||
errorcode.plugin_dirty_data_limit_exceed=DataX\u4F20\u8F93\u810F\u6570\u636E\u8D85\u8FC7\u7528\u6237\u9884\u671F\uFF0C\u8BE5\u9519\u8BEF\u901A\u5E38\u662F\u7531\u4E8E\u6E90\u7AEF\u6570\u636E\u5B58\u5728\u8F83\u591A\u4E1A\u52A1\u810F\u6570\u636E\u5BFC\u81F4\uFF0C\u8BF7\u4ED4\u7EC6\u68C0\u67E5DataX\u6C47\u62A5\u7684\u810F\u6570\u636E\u65E5\u5FD7\u4FE1\u606F, \u6216\u8005\u60A8\u53EF\u4EE5\u9002\u5F53\u8C03\u5927\u810F\u6570\u636E\u9608\u503C .
|
||||
errorcode.plugin_split_error=DataX\u63D2\u4EF6\u5207\u5206\u51FA\u9519, \u8BE5\u95EE\u9898\u901A\u5E38\u662F\u7531\u4E8EDataX\u5404\u4E2A\u63D2\u4EF6\u7F16\u7A0B\u9519\u8BEF\u5F15\u8D77\uFF0C\u8BF7\u8054\u7CFBDataX\u5F00\u53D1\u56E2\u961F\u89E3\u51B3
|
||||
errorcode.kill_job_timeout_error=kill \u4EFB\u52A1\u8D85\u65F6\uFF0C\u8BF7\u8054\u7CFBPE\u89E3\u51B3
|
||||
errorcode.start_taskgroup_error=taskGroup\u542F\u52A8\u5931\u8D25,\u8BF7\u8054\u7CFBDataX\u5F00\u53D1\u56E2\u961F\u89E3\u51B3
|
||||
errorcode.call_datax_service_failed=\u8BF7\u6C42 DataX Service \u51FA\u9519.
|
||||
errorcode.call_remote_failed=\u8FDC\u7A0B\u8C03\u7528\u5931\u8D25
|
||||
errorcode.killed_exit_value=Job \u6536\u5230\u4E86 Kill \u547D\u4EE4.
|
||||
|
||||
|
||||
httpclientutil.1=\u8BF7\u6C42\u5730\u5740\uFF1A{0}, \u8BF7\u6C42\u65B9\u6CD5\uFF1A{1},STATUS CODE = {2}, Response Entity: {3}
|
||||
httpclientutil.2=\u8FDC\u7A0B\u63A5\u53E3\u8FD4\u56DE-1,\u5C06\u91CD\u8BD5
|
||||
|
||||
|
||||
secretutil.1=\u7CFB\u7EDF\u7F16\u7A0B\u9519\u8BEF,\u4E0D\u652F\u6301\u7684\u52A0\u5BC6\u7C7B\u578B
|
||||
secretutil.2=\u7CFB\u7EDF\u7F16\u7A0B\u9519\u8BEF,\u4E0D\u652F\u6301\u7684\u52A0\u5BC6\u7C7B\u578B
|
||||
secretutil.3=rsa\u52A0\u5BC6\u51FA\u9519
|
||||
secretutil.4=rsa\u89E3\u5BC6\u51FA\u9519
|
||||
secretutil.5=3\u91CDDES\u52A0\u5BC6\u51FA\u9519
|
||||
secretutil.6=rsa\u89E3\u5BC6\u51FA\u9519
|
||||
secretutil.7=\u6784\u5EFA\u4E09\u91CDDES\u5BC6\u5319\u51FA\u9519
|
||||
secretutil.8=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u65E0\u6CD5\u627E\u5230\u5BC6\u94A5\u7684\u914D\u7F6E\u6587\u4EF6
|
||||
secretutil.9=\u8BFB\u53D6\u52A0\u89E3\u5BC6\u914D\u7F6E\u6587\u4EF6\u51FA\u9519
|
||||
secretutil.10=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E0D\u5B58\u5728\u60A8\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C
|
||||
secretutil.11=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u53EF\u80FD\u662F\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E5F\u53EF\u80FD\u662F\u7CFB\u7EDF\u7EF4\u62A4\u95EE\u9898
|
||||
secretutil.12=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E0D\u5B58\u5728\u60A8\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C
|
||||
secretutil.13=DataX\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C\u4E3A[{0}]\uFF0C\u4F46\u5728\u7CFB\u7EDF\u4E2D\u6CA1\u6709\u914D\u7F6E\uFF0C\u53EF\u80FD\u662F\u4EFB\u52A1\u5BC6\u94A5\u914D\u7F6E\u9519\u8BEF\uFF0C\u4E5F\u53EF\u80FD\u662F\u7CFB\u7EDF\u7EF4\u62A4\u95EE\u9898
|
||||
secretutil.14=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u914D\u7F6E\u7684\u5BC6\u94A5\u7248\u672C[{0}]\u5B58\u5728\u5BC6\u94A5\u4E3A\u7A7A\u7684\u60C5\u51B5
|
||||
secretutil.15=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u914D\u7F6E\u7684\u516C\u79C1\u94A5\u5BF9\u5B58\u5728\u4E3A\u7A7A\u7684\u60C5\u51B5\uFF0C\u7248\u672C[{0}]
|
||||
secretutil.16=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u65E0\u6CD5\u627E\u5230\u52A0\u89E3\u5BC6\u914D\u7F6E
|
||||
|
||||
configparser.1=\u5916\u639B\u7A0B\u5F0F[{0},{1}]\u8F09\u5165\u5931\u6557\uFF0C1s\u5F8C\u91CD\u8A66... Exception:{2}
|
||||
configparser.2=\u7372\u53D6\u4F5C\u696D\u914D\u7F6E\u8CC7\u8A0A\u5931\u6557:{0}
|
||||
configparser.3=\u7372\u53D6\u4F5C\u696D\u914D\u7F6E\u8CC7\u8A0A\u5931\u6557:{0}
|
||||
configparser.4=\u7372\u53D6\u4F5C\u696D\u914D\u7F6E\u8CC7\u8A0A\u5931\u6557:{0}
|
||||
configparser.5=\u5916\u639B\u7A0B\u5F0F\u8F09\u5165\u5931\u6557\uFF0C\u672A\u5B8C\u6210\u6307\u5B9A\u5916\u639B\u7A0B\u5F0F\u8F09\u5165:{0}
|
||||
configparser.6=\u5916\u639B\u7A0B\u5F0F\u8F09\u5165\u5931\u6557,\u5B58\u5728\u91CD\u8907\u5916\u639B\u7A0B\u5F0F:{0}
|
||||
|
||||
dataxserviceutil.1=\u5EFA\u7ACB\u7C3D\u540D\u7570\u5E38NoSuchAlgorithmException, [{0}]
|
||||
dataxserviceutil.2=\u5EFA\u7ACB\u7C3D\u540D\u7570\u5E38InvalidKeyException, [{0}]
|
||||
dataxserviceutil.3=\u5EFA\u7ACB\u7C3D\u540D\u7570\u5E38UnsupportedEncodingException, [{0}]
|
||||
|
||||
errorrecordchecker.1=\u9AD2\u6578\u64DA\u767E\u5206\u6BD4\u9650\u5236\u61C9\u8A72\u5728[0.0, 1.0]\u4E4B\u9593
|
||||
errorrecordchecker.2=\u9AD2\u6578\u64DA\u689D\u6578\u73FE\u5728\u61C9\u8A72\u70BA\u975E\u8CA0\u6574\u6578
|
||||
errorrecordchecker.3=\u9AD2\u6578\u64DA\u689D\u6578\u6AA2\u67E5\u4E0D\u901A\u904E\uFF0C\u9650\u5236\u662F[{0}]\u689D\uFF0C\u4F46\u5BE6\u969B\u4E0A\u6355\u7372\u4E86[{1}]\u689D.
|
||||
errorrecordchecker.4=\u9AD2\u6578\u64DA\u767E\u5206\u6BD4\u6AA2\u67E5\u4E0D\u901A\u904E\uFF0C\u9650\u5236\u662F[{0}]\uFF0C\u4F46\u5BE6\u969B\u4E0A\u6355\u7372\u5230[{1}].
|
||||
|
||||
|
||||
errorcode.install_error=DataX\u5F15\u64CE\u5B89\u88DD\u932F\u8AA4, \u8ACB\u806F\u7D61\u60A8\u7684\u904B\u7DAD\u89E3\u6C7A .
|
||||
errorcode.argument_error=DataX\u5F15\u64CE\u904B\u884C\u932F\u8AA4\uFF0C\u8A72\u554F\u984C\u901A\u5E38\u662F\u7531\u65BC\u5167\u90E8\u7DE8\u7A0B\u932F\u8AA4\u5F15\u8D77\uFF0C\u8ACB\u806F\u7D61DataX\u958B\u767C\u5718\u968A\u89E3\u6C7A .
|
||||
errorcode.runtime_error=DataX\u5F15\u64CE\u904B\u884C\u904E\u7A0B\u51FA\u932F\uFF0C\u5177\u9AD4\u539F\u56E0\u8ACB\u53C3\u770BDataX\u904B\u884C\u7D50\u675F\u6642\u7684\u932F\u8AA4\u8A3A\u65B7\u8CC7\u8A0A .
|
||||
errorcode.config_error=DataX\u5F15\u64CE\u914D\u7F6E\u932F\u8AA4\uFF0C\u8A72\u554F\u984C\u901A\u5E38\u662F\u7531\u65BCDataX\u5B89\u88DD\u932F\u8AA4\u5F15\u8D77\uFF0C\u8ACB\u806F\u7D61\u60A8\u7684\u904B\u7DAD\u89E3\u6C7A .
|
||||
errorcode.secret_error=DataX\u5F15\u64CE\u52A0\u89E3\u5BC6\u51FA\u932F\uFF0C\u8A72\u554F\u984C\u901A\u5E38\u662F\u7531\u65BCDataX\u5BC6\u9470\u914D\u7F6E\u932F\u8AA4\u5F15\u8D77\uFF0C\u8ACB\u806F\u7D61\u60A8\u7684\u904B\u7DAD\u89E3\u6C7A .
|
||||
errorcode.hook_load_error=\u8F09\u5165\u5916\u90E8Hook\u51FA\u73FE\u932F\u8AA4\uFF0C\u901A\u5E38\u662F\u7531\u65BCDataX\u5B89\u88DD\u5F15\u8D77\u7684
|
||||
errorcode.hook_fail_error=\u57F7\u884C\u5916\u90E8Hook\u51FA\u73FE\u932F\u8AA4
|
||||
errorcode.plugin_install_error=DataX\u5916\u639B\u7A0B\u5F0F\u5B89\u88DD\u932F\u8AA4, \u8A72\u554F\u984C\u901A\u5E38\u662F\u7531\u65BCDataX\u5B89\u88DD\u932F\u8AA4\u5F15\u8D77\uFF0C\u8ACB\u806F\u7D61\u60A8\u7684\u904B\u7DAD\u89E3\u6C7A .
|
||||
errorcode.plugin_not_found=DataX\u5916\u639B\u7A0B\u5F0F\u914D\u7F6E\u932F\u8AA4, \u8A72\u554F\u984C\u901A\u5E38\u662F\u7531\u65BCDataX\u5B89\u88DD\u932F\u8AA4\u5F15\u8D77\uFF0C\u8ACB\u806F\u7D61\u60A8\u7684\u904B\u7DAD\u89E3\u6C7A .
|
||||
errorcode.plugin_init_error=DataX\u5916\u639B\u7A0B\u5F0F\u521D\u59CB\u5316\u932F\u8AA4, \u8A72\u554F\u984C\u901A\u5E38\u662F\u7531\u65BCDataX\u5B89\u88DD\u932F\u8AA4\u5F15\u8D77\uFF0C\u8ACB\u806F\u7D61\u60A8\u7684\u904B\u7DAD\u89E3\u6C7A .
|
||||
errorcode.plugin_runtime_error=DataX\u5916\u639B\u7A0B\u5F0F\u904B\u884C\u6642\u51FA\u932F, \u5177\u9AD4\u539F\u56E0\u8ACB\u53C3\u770BDataX\u904B\u884C\u7D50\u675F\u6642\u7684\u932F\u8AA4\u8A3A\u65B7\u8CC7\u8A0A .
|
||||
errorcode.plugin_dirty_data_limit_exceed=DataX\u50B3\u8F38\u9AD2\u6578\u64DA\u8D85\u904E\u7528\u6236\u9810\u671F\uFF0C\u8A72\u932F\u8AA4\u901A\u5E38\u662F\u7531\u65BC\u6E90\u7AEF\u6578\u64DA\u5B58\u5728\u8F03\u591A\u696D\u52D9\u9AD2\u6578\u64DA\u5C0E\u81F4\uFF0C\u8ACB\u4ED4\u7D30\u6AA2\u67E5DataX\u5F59\u5831\u7684\u9AD2\u6578\u64DA\u65E5\u8A8C\u8CC7\u8A0A, \u6216\u8005\u60A8\u53EF\u4EE5\u9069\u7576\u8ABF\u5927\u9AD2\u6578\u64DA\u95BE\u503C .
|
||||
errorcode.plugin_split_error=DataX\u5916\u639B\u7A0B\u5F0F\u5207\u5206\u51FA\u932F, \u8A72\u554F\u984C\u901A\u5E38\u662F\u7531\u65BCDataX\u5404\u500B\u5916\u639B\u7A0B\u5F0F\u7DE8\u7A0B\u932F\u8AA4\u5F15\u8D77\uFF0C\u8ACB\u806F\u7D61DataX\u958B\u767C\u5718\u968A\u89E3\u6C7A
|
||||
errorcode.kill_job_timeout_error=kill \u4EFB\u52D9\u903E\u6642\uFF0C\u8ACB\u806F\u7D61PE\u89E3\u6C7A
|
||||
errorcode.start_taskgroup_error=taskGroup\u555F\u52D5\u5931\u6557,\u8ACB\u806F\u7D61DataX\u958B\u767C\u5718\u968A\u89E3\u6C7A
|
||||
errorcode.call_datax_service_failed=\u8ACB\u6C42 DataX Service \u51FA\u932F.
|
||||
errorcode.call_remote_failed=\u9060\u7A0B\u8ABF\u7528\u5931\u6557
|
||||
errorcode.killed_exit_value=Job \u6536\u5230\u4E86 Kill \u547D\u4EE4.
|
||||
|
||||
|
||||
httpclientutil.1=\u8ACB\u6C42\u5730\u5740\uFF1A{0}, \u8ACB\u6C42\u65B9\u6CD5\uFF1A{1},STATUS CODE = {2}, Response Entity: {3}
|
||||
httpclientutil.2=\u9060\u7A0B\u63A5\u53E3\u8FD4\u56DE-1,\u5C07\u91CD\u8A66
|
||||
|
||||
|
||||
secretutil.1=\u7CFB\u7D71\u7DE8\u7A0B\u932F\u8AA4,\u4E0D\u652F\u63F4\u7684\u52A0\u5BC6\u985E\u578B
|
||||
secretutil.2=\u7CFB\u7D71\u7DE8\u7A0B\u932F\u8AA4,\u4E0D\u652F\u63F4\u7684\u52A0\u5BC6\u985E\u578B
|
||||
secretutil.3=rsa\u52A0\u5BC6\u51FA\u932F
|
||||
secretutil.4=rsa\u89E3\u5BC6\u51FA\u932F
|
||||
secretutil.5=3\u91CDDES\u52A0\u5BC6\u51FA\u932F
|
||||
secretutil.6=rsa\u89E3\u5BC6\u51FA\u932F
|
||||
secretutil.7=\u69CB\u5EFA\u4E09\u91CDDES\u5BC6\u5319\u51FA\u932F
|
||||
secretutil.8=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u7121\u6CD5\u627E\u5230\u5BC6\u9470\u7684\u914D\u7F6E\u6A94\u6848
|
||||
secretutil.9=\u8B80\u53D6\u52A0\u89E3\u5BC6\u914D\u7F6E\u6A94\u6848\u51FA\u932F
|
||||
secretutil.10=DataX\u914D\u7F6E\u7684\u5BC6\u9470\u7248\u672C\u70BA[{0}]\uFF0C\u4F46\u5728\u7CFB\u7D71\u4E2D\u6C92\u6709\u914D\u7F6E\uFF0C\u4EFB\u52D9\u5BC6\u9470\u914D\u7F6E\u932F\u8AA4\uFF0C\u4E0D\u5B58\u5728\u60A8\u914D\u7F6E\u7684\u5BC6\u9470\u7248\u672C
|
||||
secretutil.11=DataX\u914D\u7F6E\u7684\u5BC6\u9470\u7248\u672C\u70BA[{0}]\uFF0C\u4F46\u5728\u7CFB\u7D71\u4E2D\u6C92\u6709\u914D\u7F6E\uFF0C\u53EF\u80FD\u662F\u4EFB\u52D9\u5BC6\u9470\u914D\u7F6E\u932F\u8AA4\uFF0C\u4E5F\u53EF\u80FD\u662F\u7CFB\u7D71\u7DAD\u8B77\u554F\u984C
|
||||
secretutil.12=DataX\u914D\u7F6E\u7684\u5BC6\u9470\u7248\u672C\u70BA[{0}]\uFF0C\u4F46\u5728\u7CFB\u7D71\u4E2D\u6C92\u6709\u914D\u7F6E\uFF0C\u4EFB\u52D9\u5BC6\u9470\u914D\u7F6E\u932F\u8AA4\uFF0C\u4E0D\u5B58\u5728\u60A8\u914D\u7F6E\u7684\u5BC6\u9470\u7248\u672C
|
||||
secretutil.13=DataX\u914D\u7F6E\u7684\u5BC6\u9470\u7248\u672C\u70BA[{0}]\uFF0C\u4F46\u5728\u7CFB\u7D71\u4E2D\u6C92\u6709\u914D\u7F6E\uFF0C\u53EF\u80FD\u662F\u4EFB\u52D9\u5BC6\u9470\u914D\u7F6E\u932F\u8AA4\uFF0C\u4E5F\u53EF\u80FD\u662F\u7CFB\u7D71\u7DAD\u8B77\u554F\u984C
|
||||
secretutil.14=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u914D\u7F6E\u7684\u5BC6\u9470\u7248\u672C[{0}]\u5B58\u5728\u5BC6\u9470\u70BA\u7A7A\u7684\u60C5\u6CC1
|
||||
secretutil.15=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u914D\u7F6E\u7684\u516C\u79C1\u9470\u5C0D\u5B58\u5728\u70BA\u7A7A\u7684\u60C5\u6CC1\uFF0C\u7248\u672C[{0}]
|
||||
secretutil.16=DataX\u914D\u7F6E\u8981\u6C42\u52A0\u89E3\u5BC6\uFF0C\u4F46\u7121\u6CD5\u627E\u5230\u52A0\u89E3\u5BC6\u914D\u7F6E
|
||||
|
@ -2,6 +2,6 @@
|
||||
"name": "hbase11xsqlreader",
|
||||
"class": "com.alibaba.datax.plugin.reader.hbase11xsqlreader.HbaseSQLReader",
|
||||
"description": "useScene: prod. mechanism: Scan to read data.",
|
||||
"developer": "liwei.li, bug reported to : liwei.li@alibaba-inc.com"
|
||||
"developer": "alibaba"
|
||||
}
|
||||
|
||||
|
@ -16,6 +16,17 @@
|
||||
<hadoop.version>2.7.1</hadoop.version>
|
||||
</properties>
|
||||
<dependencies>
|
||||
<dependency>
|
||||
<groupId>org.apache.logging.log4j</groupId>
|
||||
<artifactId>log4j-api</artifactId>
|
||||
<version>2.17.1</version>
|
||||
</dependency>
|
||||
|
||||
<dependency>
|
||||
<groupId>org.apache.logging.log4j</groupId>
|
||||
<artifactId>log4j-core</artifactId>
|
||||
<version>2.17.1</version>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.alibaba.datax</groupId>
|
||||
<artifactId>datax-common</artifactId>
|
||||
@ -51,6 +62,11 @@
|
||||
<artifactId>hadoop-yarn-common</artifactId>
|
||||
<version>${hadoop.version}</version>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.aliyun.oss</groupId>
|
||||
<artifactId>hadoop-aliyun</artifactId>
|
||||
<version>2.7.2</version>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.apache.hadoop</groupId>
|
||||
<artifactId>hadoop-mapreduce-client-core</artifactId>
|
||||
|
@ -19,6 +19,17 @@
|
||||
</properties>
|
||||
|
||||
<dependencies>
|
||||
<dependency>
|
||||
<groupId>org.apache.logging.log4j</groupId>
|
||||
<artifactId>log4j-api</artifactId>
|
||||
<version>2.17.1</version>
|
||||
</dependency>
|
||||
|
||||
<dependency>
|
||||
<groupId>org.apache.logging.log4j</groupId>
|
||||
<artifactId>log4j-core</artifactId>
|
||||
<version>2.17.1</version>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.alibaba.datax</groupId>
|
||||
<artifactId>datax-common</artifactId>
|
||||
@ -30,6 +41,11 @@
|
||||
</exclusion>
|
||||
</exclusions>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.aliyun.oss</groupId>
|
||||
<artifactId>hadoop-aliyun</artifactId>
|
||||
<version>2.7.2</version>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.slf4j</groupId>
|
||||
<artifactId>slf4j-api</artifactId>
|
||||
|
@ -6,10 +6,13 @@ import com.alibaba.datax.common.exception.DataXException;
|
||||
import com.alibaba.datax.common.plugin.RecordReceiver;
|
||||
import com.alibaba.datax.common.plugin.TaskPluginCollector;
|
||||
import com.alibaba.datax.common.util.Configuration;
|
||||
import com.alibaba.datax.plugin.unstructuredstorage.util.ColumnTypeUtil;
|
||||
import com.alibaba.datax.plugin.unstructuredstorage.util.HdfsUtil;
|
||||
import com.alibaba.fastjson.JSON;
|
||||
import com.alibaba.fastjson.JSONObject;
|
||||
import com.google.common.collect.Lists;
|
||||
import org.apache.commons.lang3.StringUtils;
|
||||
import org.apache.commons.lang3.Validate;
|
||||
import org.apache.commons.lang3.tuple.MutablePair;
|
||||
import org.apache.hadoop.fs.*;
|
||||
import org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat;
|
||||
@ -24,6 +27,10 @@ import org.apache.hadoop.mapred.*;
|
||||
import org.apache.hadoop.security.UserGroupInformation;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
import parquet.schema.OriginalType;
|
||||
import parquet.schema.PrimitiveType;
|
||||
import parquet.schema.Types;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.text.SimpleDateFormat;
|
||||
import java.util.*;
|
||||
@ -556,4 +563,67 @@ public class HdfsHelper {
|
||||
transportResult.setLeft(recordList);
|
||||
return transportResult;
|
||||
}
|
||||
|
||||
|
||||
public static String generateParquetSchemaFromColumnAndType(List<Configuration> columns) {
|
||||
Map<String, ColumnTypeUtil.DecimalInfo> decimalColInfo = new HashMap<>(16);
|
||||
ColumnTypeUtil.DecimalInfo PARQUET_DEFAULT_DECIMAL_INFO = new ColumnTypeUtil.DecimalInfo(10, 2);
|
||||
Types.MessageTypeBuilder typeBuilder = Types.buildMessage();
|
||||
for (Configuration column : columns) {
|
||||
String name = column.getString("name");
|
||||
String colType = column.getString("type");
|
||||
Validate.notNull(name, "column.name can't be null");
|
||||
Validate.notNull(colType, "column.type can't be null");
|
||||
switch (colType.toLowerCase()) {
|
||||
case "tinyint":
|
||||
case "smallint":
|
||||
case "int":
|
||||
typeBuilder.optional(PrimitiveType.PrimitiveTypeName.INT32).named(name);
|
||||
break;
|
||||
case "bigint":
|
||||
case "long":
|
||||
typeBuilder.optional(PrimitiveType.PrimitiveTypeName.INT64).named(name);
|
||||
break;
|
||||
case "float":
|
||||
typeBuilder.optional(PrimitiveType.PrimitiveTypeName.FLOAT).named(name);
|
||||
break;
|
||||
case "double":
|
||||
typeBuilder.optional(PrimitiveType.PrimitiveTypeName.DOUBLE).named(name);
|
||||
break;
|
||||
case "binary":
|
||||
typeBuilder.optional(PrimitiveType.PrimitiveTypeName.BINARY).named(name);
|
||||
break;
|
||||
case "char":
|
||||
case "varchar":
|
||||
case "string":
|
||||
typeBuilder.optional(PrimitiveType.PrimitiveTypeName.BINARY).as(OriginalType.UTF8).named(name);
|
||||
break;
|
||||
case "boolean":
|
||||
typeBuilder.optional(PrimitiveType.PrimitiveTypeName.BOOLEAN).named(name);
|
||||
break;
|
||||
case "timestamp":
|
||||
typeBuilder.optional(PrimitiveType.PrimitiveTypeName.INT96).named(name);
|
||||
break;
|
||||
case "date":
|
||||
typeBuilder.optional(PrimitiveType.PrimitiveTypeName.INT32).as(OriginalType.DATE).named(name);
|
||||
break;
|
||||
default:
|
||||
if (ColumnTypeUtil.isDecimalType(colType)) {
|
||||
ColumnTypeUtil.DecimalInfo decimalInfo = ColumnTypeUtil.getDecimalInfo(colType, PARQUET_DEFAULT_DECIMAL_INFO);
|
||||
typeBuilder.optional(PrimitiveType.PrimitiveTypeName.FIXED_LEN_BYTE_ARRAY)
|
||||
.as(OriginalType.DECIMAL)
|
||||
.precision(decimalInfo.getPrecision())
|
||||
.scale(decimalInfo.getScale())
|
||||
.length(HdfsUtil.computeMinBytesForPrecision(decimalInfo.getPrecision()))
|
||||
.named(name);
|
||||
|
||||
decimalColInfo.put(name, decimalInfo);
|
||||
} else {
|
||||
typeBuilder.optional(PrimitiveType.PrimitiveTypeName.BINARY).named(name);
|
||||
}
|
||||
break;
|
||||
}
|
||||
}
|
||||
return typeBuilder.named("m").toString();
|
||||
}
|
||||
}
|
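As a quick illustration of the new helper, the hedged sketch below feeds it a hypothetical column list of the same shape HdfsWriter passes in. The column names and types are made up, the exact formatting of the returned schema string comes from the parquet Types builder, and datax-common plus the hdfswriter classes are assumed to be on the classpath.

```java
// Hedged usage sketch (not part of the commit): build a column list the way the
// writer config would and print the parquet schema string the helper derives.
import com.alibaba.datax.common.util.Configuration;

import java.util.Arrays;
import java.util.List;

public class ParquetSchemaSketch {
    public static void main(String[] args) {
        List<Configuration> columns = Arrays.asList(
                Configuration.from("{\"name\":\"id\",\"type\":\"bigint\"}"),
                Configuration.from("{\"name\":\"name\",\"type\":\"varchar\"}"),
                Configuration.from("{\"name\":\"price\",\"type\":\"decimal(10,2)\"}"));
        // Roughly: message m { optional int64 id; optional binary name (UTF8);
        //                      optional fixed_len_byte_array(...) price (DECIMAL(10,2)); }
        System.out.println(HdfsHelper.generateParquetSchemaFromColumnAndType(columns));
    }
}
```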
||||
|
@ -9,9 +9,11 @@ import com.google.common.collect.Sets;
|
||||
import org.apache.commons.io.Charsets;
|
||||
import org.apache.commons.io.IOUtils;
|
||||
import org.apache.commons.lang3.StringUtils;
|
||||
import org.apache.commons.lang3.Validate;
|
||||
import org.apache.hadoop.fs.Path;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
import parquet.schema.MessageTypeParser;
|
||||
|
||||
import java.util.*;
|
||||
|
||||
@ -323,8 +325,55 @@ public class HdfsWriter extends Writer {
|
||||
}
|
||||
return tmpFilePath;
|
||||
}
|
||||
public void unitizeParquetConfig(Configuration writerSliceConfig) {
|
||||
String parquetSchema = writerSliceConfig.getString(Key.PARQUET_SCHEMA);
|
||||
if (StringUtils.isNotBlank(parquetSchema)) {
|
||||
LOG.info("parquetSchema has config. use parquetSchema:\n{}", parquetSchema);
|
||||
return;
|
||||
}
|
||||
|
||||
List<Configuration> columns = writerSliceConfig.getListConfiguration(Key.COLUMN);
|
||||
if (columns == null || columns.isEmpty()) {
|
||||
throw DataXException.asDataXException("parquetSchema or column can't be blank!");
|
||||
}
|
||||
|
||||
parquetSchema = generateParquetSchemaFromColumn(columns);
|
||||
// Keep the legacy schema generation for backward compatibility; if the configured schema fails to parse, fall back to the new logic
|
||||
try {
|
||||
MessageTypeParser.parseMessageType(parquetSchema);
|
||||
} catch (Throwable e) {
|
||||
LOG.warn("The generated parquetSchema {} is illegal, try to generate parquetSchema in another way", parquetSchema);
|
||||
parquetSchema = HdfsHelper.generateParquetSchemaFromColumnAndType(columns);
|
||||
LOG.info("The last generated parquet schema is {}", parquetSchema);
|
||||
}
|
||||
writerSliceConfig.set(Key.PARQUET_SCHEMA, parquetSchema);
|
||||
LOG.info("dataxParquetMode use default fields.");
|
||||
writerSliceConfig.set(Key.DATAX_PARQUET_MODE, "fields");
|
||||
}
|
||||
|
||||
private String generateParquetSchemaFromColumn(List<Configuration> columns) {
|
||||
StringBuffer parquetSchemaStringBuffer = new StringBuffer();
|
||||
parquetSchemaStringBuffer.append("message m {");
|
||||
for (Configuration column: columns) {
|
||||
String name = column.getString("name");
|
||||
Validate.notNull(name, "column.name can't be null");
|
||||
|
||||
String type = column.getString("type");
|
||||
Validate.notNull(type, "column.type can't be null");
|
||||
|
||||
String parquetColumn = String.format("optional %s %s;", type, name);
|
||||
parquetSchemaStringBuffer.append(parquetColumn);
|
||||
}
|
||||
parquetSchemaStringBuffer.append("}");
|
||||
String parquetSchema = parquetSchemaStringBuffer.toString();
|
||||
LOG.info("generate parquetSchema:\n{}", parquetSchema);
|
||||
return parquetSchema;
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
||||
public static class Task extends Writer.Task {
|
||||
private static final Logger LOG = LoggerFactory.getLogger(Task.class);
|
||||
|
||||
|
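The fallback in unitizeParquetConfig hinges on whether the legacy schema string parses. The hedged sketch below, using invented schema strings, reproduces that check in isolation: a plain parquet primitive passes MessageTypeParser, while a Hive-style decimal(10,2) does not, which is what pushes the writer onto generateParquetSchemaFromColumnAndType.

```java
// Hedged sketch of the parse-or-fall-back decision; the schema strings are invented.
import parquet.schema.MessageTypeParser;

public class SchemaFallbackSketch {
    public static void main(String[] args) {
        String[] candidates = {
                "message m {optional int64 id;}",              // legacy form, parses fine
                "message m {optional decimal(10,2) price;}"    // Hive type name, rejected
        };
        for (String schema : candidates) {
            try {
                MessageTypeParser.parseMessageType(schema);
                System.out.println("accepted legacy schema: " + schema);
            } catch (Throwable e) {
                System.out.println("rejected, would fall back to typed generation: " + schema);
            }
        }
    }
}
```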
@ -33,4 +33,17 @@ public class Key {
|
||||
public static final String KERBEROS_PRINCIPAL = "kerberosPrincipal";
|
||||
// hadoop config
|
||||
public static final String HADOOP_CONFIG = "hadoopConfig";
|
||||
|
||||
// useOldRawDataTransf
|
||||
public final static String PARQUET_FILE_USE_RAW_DATA_TRANSF = "useRawDataTransf";
|
||||
|
||||
public final static String DATAX_PARQUET_MODE = "dataxParquetMode";
|
||||
|
||||
// hdfs username, defaults to admin
|
||||
public final static String HDFS_USERNAME = "hdfsUsername";
|
||||
|
||||
public static final String PROTECTION = "protection";
|
||||
|
||||
public static final String PARQUET_SCHEMA = "parquetSchema";
|
||||
public static final String PARQUET_MERGE_RESULT = "parquetMergeResult";
|
||||
}
|
||||
|
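For context, a hedged sketch of how a writer task might read the newly added keys from its slice configuration. The JSON literal and the default value are assumptions; Key is the class from this diff and Configuration comes from datax-common.

```java
// Hedged sketch only: reading the newly added keys from a slice configuration.
// Assumes the hdfswriter Key class and datax-common are on the classpath.
import com.alibaba.datax.common.util.Configuration;

public class KeyUsageSketch {
    public static void main(String[] args) {
        Configuration conf = Configuration.from(
                "{\"parquetSchema\":\"message m {optional int64 id;}\",\"hdfsUsername\":\"admin\"}");
        String schema = conf.getString(Key.PARQUET_SCHEMA);        // "parquetSchema"
        String user = conf.getString(Key.HDFS_USERNAME, "admin");  // default value admin
        System.out.println(schema + " / " + user);
    }
}
```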
204
hologresjdbcwriter/doc/hologresjdbcwriter.md
Normal file
@ -0,0 +1,204 @@
|
||||
# DataX HologresJdbcWriter
|
||||
|
||||
|
||||
---
|
||||
|
||||
|
||||
## 1 Quick Introduction
|
||||
|
||||
The HologresJdbcWriter plugin writes data into a target Hologres table. Under the hood, HologresJdbcWriter connects to the remote Hologres database via JDBC and executes the corresponding insert into ... on conflict SQL statements to write the data into Hologres, committing records in batches internally.
|
||||
|
||||
<br />
|
||||
|
||||
* HologresJdbcWriter only supports synchronizing a single table
|
||||
|
||||
## 2 Implementation
|
||||
|
||||
HologresJdbcWriter receives the protocol data produced by the Reader through the DataX framework and, based on your configuration, generates the corresponding SQL insert statements (an illustrative JDBC sketch follows the list below)
|
||||
|
||||
* `insert into ... on conflict`
|
||||
|
||||
|
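A minimal JDBC sketch of the kind of statement described above, shown only for illustration: the table, columns, endpoint and credentials are taken from the sample job in section 3.1 and are assumptions, a PostgreSQL JDBC driver on the classpath is assumed, and the real plugin batches records and writes through holo-client rather than hand-written JDBC.

```java
// Hedged sketch only: one insert-on-conflict round trip against the sample job's
// endpoint (jdbc:postgresql://127.0.0.1:3002/datax, table "test", user/password "xx").
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class OnConflictSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://127.0.0.1:3002/datax";
        try (Connection conn = DriverManager.getConnection(url, "xx", "xx");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO test (id, name) VALUES (?, ?) "
                             + "ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name")) {
            ps.setLong(1, 1L);          // primary key column
            ps.setString(2, "DataX");   // non-key column, overwritten on conflict
            ps.executeUpdate();
        }
    }
}
```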
||||
## 3 Features
|
||||
|
||||
### 3.1 Sample Configuration
|
||||
|
||||
* The sample below generates data in memory and imports it through HologresJdbcWriter.
|
||||
|
||||
```json
|
||||
{
|
||||
"job": {
|
||||
"setting": {
|
||||
"speed": {
|
||||
"channel": 1
|
||||
}
|
||||
},
|
||||
"content": [
|
||||
{
|
||||
"reader": {
|
||||
"name": "streamreader",
|
||||
"parameter": {
|
||||
"column" : [
|
||||
{
|
||||
"value": "DataX",
|
||||
"type": "string"
|
||||
},
|
||||
{
|
||||
"value": 19880808,
|
||||
"type": "long"
|
||||
},
|
||||
{
|
||||
"value": "1988-08-08 08:08:08",
|
||||
"type": "date"
|
||||
},
|
||||
{
|
||||
"value": true,
|
||||
"type": "bool"
|
||||
},
|
||||
{
|
||||
"value": "test",
|
||||
"type": "bytes"
|
||||
}
|
||||
],
|
||||
"sliceRecordCount": 1000
|
||||
}
|
||||
},
|
||||
"writer": {
|
||||
"name": "hologresjdbcwriter",
|
||||
"parameter": {
|
||||
"username": "xx",
|
||||
"password": "xx",
|
||||
"column": [
|
||||
"id",
|
||||
"name"
|
||||
],
|
||||
"preSql": [
|
||||
"delete from test"
|
||||
],
|
||||
"connection": [
|
||||
{
|
||||
"jdbcUrl": "jdbc:postgresql://127.0.0.1:3002/datax",
|
||||
"table": [
|
||||
"test"
|
||||
]
|
||||
}
|
||||
],
|
||||
"writeMode" : "REPLACE",
|
||||
"client" : {
|
||||
"writeThreadSize" : 3
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
|
||||
### 3.2 Parameter Description
|
||||
|
||||
* **jdbcUrl**
|
||||
|
||||
* Description: JDBC connection information of the target database; jdbcUrl must be contained in the connection configuration block.
|
||||
|
||||
Note: 1. Only one value can be configured per database.
|
||||
2. jdbcUrl follows the official PostgreSQL format and may carry additional connection parameters. For details, see the official PostgreSQL documentation or consult the DBA in charge.
|
||||
|
||||
|
||||
* Required: yes <br />
|
||||
|
||||
* Default: none <br />
|
||||
|
||||
* **username**
|
||||
|
||||
* Description: username for the target database <br />
|
||||
|
||||
* Required: yes <br />
|
||||
|
||||
* Default: none <br />
|
||||
|
||||
* **password**
|
||||
|
||||
* Description: password for the target database <br />
|
||||
|
||||
* Required: yes <br />
|
||||
|
||||
* Default: none <br />
|
||||
|
||||
* **table**
|
||||
|
||||
* Description: name of the target table. Writing to a single table only is supported.
|
||||
|
||||
Note: table and jdbcUrl must be contained in the connection configuration block
|
||||
|
||||
* Required: yes <br />
|
||||
|
||||
* Default: none <br />
|
||||
|
||||
* **column**
|
||||
|
||||
* Description: the fields of the target table that data should be written into, separated by commas, e.g. "column": ["id","name","age"]. To write all columns in order, use \*, e.g. "column": ["\*"]
|
||||
|
||||
Note: 1. We strongly discourage this, because when the number or types of columns in the target table change, your job may run incorrectly or fail
|
||||
2. column must not contain any constant values here
|
||||
|
||||
* Required: yes <br />
|
||||
|
||||
* Default: none <br />
|
||||
|
||||
* **preSql**
|
||||
|
||||
* Description: standard SQL statements executed before data is written into the target table. If the SQL refers to the table being written, use `@table`; when the SQL is actually executed, the variable is replaced with the real table name. For example, `"preSql": ["delete from @table"]` clears the target table before the load. <br />
|
||||
|
||||
* Required: no <br />
|
||||
|
||||
* Default: none <br />
|
||||
|
||||
* **postSql**
|
||||
|
||||
* Description: standard SQL statements executed after data has been written into the target table (same mechanism as preSql). <br />
|
||||
|
||||
* Required: no <br />
|
||||
|
||||
* Default: none <br />
|
||||
|
||||
* **batchSize**
|
||||
|
||||
* Description: the number of records submitted in one batch. A larger value can greatly reduce the number of network round trips between DataX and HologresJdbcWriter and improve overall throughput, but setting it too high may cause the DataX process to run out of memory (OOM).<br />
|
||||
|
||||
* Required: no <br />
|
||||
|
||||
* Default: 512 <br />
|
||||
|
||||
* **writeMode**
|
||||
|
||||
* Description: when writing to a Hologres table that has a primary key, controls the strategy applied after a primary-key conflict. REPLACE means all columns of the Hologres row are overwritten on conflict (columns not configured in the writer are filled with null); UPDATE means only the columns configured in the writer are overwritten on conflict; IGNORE means the new data is discarded on conflict and nothing is overwritten. <br />
|
||||
|
||||
* Required: no <br />
|
||||
|
||||
* Default: REPLACE <br />
|
||||
|
||||
* **client.writeThreadSize**
|
||||
|
||||
* Description: size of the connection pool used to write to Hologres; multiple connections write data in parallel. <br />
|
||||
|
||||
* Required: no <br />
|
||||
|
||||
* Default: 1 <br />
|
||||
|
||||
### 3.3 Type Conversion
|
||||
|
||||
HologresJdbcWriter currently supports most Hologres data types, but a few are not yet supported, so please check your types.
|
||||
|
||||
The type conversion table that HologresJdbcWriter uses for Hologres is listed below:
|
||||
|
||||
| DataX internal type | Hologres data type |
|
||||
| -------- | ----- |
|
||||
| Long |bigint, integer, smallint |
|
||||
| Double |double precision, money, numeric, real |
|
||||
| String |varchar, char, text, bit|
|
||||
| Date |date, time, timestamp |
|
||||
| Boolean |bool|
|
||||
| Bytes |bytea|
|
90
hologresjdbcwriter/pom.xml
Normal file
@ -0,0 +1,90 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<project xmlns="http://maven.apache.org/POM/4.0.0"
|
||||
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
|
||||
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
|
||||
<parent>
|
||||
<artifactId>datax-all</artifactId>
|
||||
<groupId>com.alibaba.datax</groupId>
|
||||
<version>0.0.1-SNAPSHOT</version>
|
||||
</parent>
|
||||
<modelVersion>4.0.0</modelVersion>
|
||||
|
||||
<artifactId>hologresjdbcwriter</artifactId>
|
||||
<name>hologresjdbcwriter</name>
|
||||
<packaging>jar</packaging>
|
||||
<description>writer data into hologres using jdbc</description>
|
||||
|
||||
<properties>
|
||||
<jdk-version>1.8</jdk-version>
|
||||
</properties>
|
||||
<dependencies>
|
||||
<dependency>
|
||||
<groupId>com.alibaba.datax</groupId>
|
||||
<artifactId>datax-common</artifactId>
|
||||
<version>${datax-project-version}</version>
|
||||
<exclusions>
|
||||
<exclusion>
|
||||
<artifactId>slf4j-log4j12</artifactId>
|
||||
<groupId>org.slf4j</groupId>
|
||||
</exclusion>
|
||||
</exclusions>
|
||||
</dependency>
|
||||
|
||||
<dependency>
|
||||
<groupId>org.slf4j</groupId>
|
||||
<artifactId>slf4j-api</artifactId>
|
||||
</dependency>
|
||||
|
||||
<dependency>
|
||||
<groupId>ch.qos.logback</groupId>
|
||||
<artifactId>logback-classic</artifactId>
|
||||
</dependency>
|
||||
|
||||
<dependency>
|
||||
<groupId>com.alibaba.datax</groupId>
|
||||
<artifactId>plugin-rdbms-util</artifactId>
|
||||
<version>${datax-project-version}</version>
|
||||
</dependency>
|
||||
|
||||
<dependency>
|
||||
<groupId>com.alibaba.hologres</groupId>
|
||||
<artifactId>holo-client</artifactId>
|
||||
<version>2.1.0</version>
|
||||
</dependency>
|
||||
|
||||
</dependencies>
|
||||
<build>
|
||||
<plugins>
|
||||
<!-- compiler plugin -->
|
||||
<plugin>
|
||||
<artifactId>maven-compiler-plugin</artifactId>
|
||||
<configuration>
|
||||
<source>${jdk-version}</source>
|
||||
<target>${jdk-version}</target>
|
||||
<encoding>${project-sourceEncoding}</encoding>
|
||||
</configuration>
|
||||
</plugin>
|
||||
<!-- assembly plugin -->
|
||||
<plugin>
|
||||
<artifactId>maven-assembly-plugin</artifactId>
|
||||
<configuration>
|
||||
<descriptors>
|
||||
<descriptor>src/main/assembly/package.xml</descriptor>
|
||||
</descriptors>
|
||||
<finalName>datax</finalName>
|
||||
</configuration>
|
||||
<executions>
|
||||
<execution>
|
||||
<id>dwzip</id>
|
||||
<phase>package</phase>
|
||||
<goals>
|
||||
<goal>single</goal>
|
||||
</goals>
|
||||
</execution>
|
||||
</executions>
|
||||
</plugin>
|
||||
</plugins>
|
||||
</build>
|
||||
</project>
|
35
hologresjdbcwriter/src/main/assembly/package.xml
Executable file
@ -0,0 +1,35 @@
|
||||
<assembly
|
||||
xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0"
|
||||
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
|
||||
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd">
|
||||
<id></id>
|
||||
<formats>
|
||||
<format>dir</format>
|
||||
</formats>
|
||||
<includeBaseDirectory>false</includeBaseDirectory>
|
||||
<fileSets>
|
||||
<fileSet>
|
||||
<directory>src/main/resources</directory>
|
||||
<includes>
|
||||
<include>plugin.json</include>
|
||||
<include>plugin_job_template.json</include>
|
||||
</includes>
|
||||
<outputDirectory>plugin/writer/hologresjdbcwriter</outputDirectory>
|
||||
</fileSet>
|
||||
<fileSet>
|
||||
<directory>target/</directory>
|
||||
<includes>
|
||||
<include>hologresjdbcwriter-0.0.1-SNAPSHOT.jar</include>
|
||||
</includes>
|
||||
<outputDirectory>plugin/writer/hologresjdbcwriter</outputDirectory>
|
||||
</fileSet>
|
||||
</fileSets>
|
||||
|
||||
<dependencySets>
|
||||
<dependencySet>
|
||||
<useProjectArtifact>false</useProjectArtifact>
|
||||
<outputDirectory>plugin/writer/hologresjdbcwriter/libs</outputDirectory>
|
||||
<scope>runtime</scope>
|
||||
</dependencySet>
|
||||
</dependencySets>
|
||||
</assembly>
|
@ -0,0 +1,526 @@
|
||||
package com.alibaba.datax.plugin.writer.hologresjdbcwriter;
|
||||
|
||||
import com.alibaba.datax.common.element.Column;
|
||||
import com.alibaba.datax.common.element.DateColumn;
|
||||
import com.alibaba.datax.common.element.LongColumn;
|
||||
import com.alibaba.datax.common.element.Record;
|
||||
import com.alibaba.datax.common.exception.DataXException;
|
||||
import com.alibaba.datax.common.plugin.RecordReceiver;
|
||||
import com.alibaba.datax.common.plugin.TaskPluginCollector;
|
||||
import com.alibaba.datax.common.util.Configuration;
|
||||
import com.alibaba.datax.common.util.RetryUtil;
|
||||
import com.alibaba.datax.plugin.rdbms.util.DBUtil;
|
||||
import com.alibaba.datax.plugin.rdbms.util.DBUtilErrorCode;
|
||||
import com.alibaba.datax.plugin.rdbms.util.DataBaseType;
|
||||
import com.alibaba.datax.plugin.writer.hologresjdbcwriter.util.ConfLoader;
|
||||
import com.alibaba.datax.plugin.writer.hologresjdbcwriter.util.OriginalConfPretreatmentUtil;
|
||||
import com.alibaba.datax.plugin.writer.hologresjdbcwriter.util.WriterUtil;
|
||||
import com.alibaba.fastjson.JSONArray;
|
||||
import com.alibaba.fastjson.JSONObject;
|
||||
import com.alibaba.hologres.client.HoloClient;
|
||||
import com.alibaba.hologres.client.HoloConfig;
|
||||
import com.alibaba.hologres.client.Put;
|
||||
import com.alibaba.hologres.client.exception.HoloClientWithDetailsException;
|
||||
import com.alibaba.hologres.client.model.TableSchema;
|
||||
import org.apache.commons.lang3.StringUtils;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.sql.Connection;
|
||||
import java.sql.DriverManager;
|
||||
import java.sql.SQLException;
|
||||
import java.sql.Time;
|
||||
import java.sql.Timestamp;
|
||||
import java.sql.Types;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
|
||||
public class BaseWriter {
|
||||
|
||||
protected static final Set<String> ignoreConfList;
|
||||
|
||||
static {
|
||||
ignoreConfList = new HashSet<>();
|
||||
ignoreConfList.add("jdbcUrl");
|
||||
ignoreConfList.add("username");
|
||||
ignoreConfList.add("password");
|
||||
ignoreConfList.add("writeMode");
|
||||
}
|
||||
|
||||
enum WriteMode {
|
||||
IGNORE,
|
||||
UPDATE,
|
||||
REPLACE
|
||||
}
|
||||
|
||||
private static WriteMode getWriteMode(String text) {
|
||||
text = text.toUpperCase();
|
||||
switch (text) {
|
||||
case "IGNORE":
|
||||
return WriteMode.IGNORE;
|
||||
case "UPDATE":
|
||||
return WriteMode.UPDATE;
|
||||
case "REPLACE":
|
||||
return WriteMode.REPLACE;
|
||||
default:
|
||||
throw DataXException.asDataXException(DBUtilErrorCode.ILLEGAL_VALUE, "writeMode只支持IGNORE,UPDATE,REPLACE,无法识别 " + text);
|
||||
}
|
||||
}
|
||||
|
||||
public static class Job {
|
||||
private DataBaseType dataBaseType;
|
||||
|
||||
private static final Logger LOG = LoggerFactory
|
||||
.getLogger(BaseWriter.Job.class);
|
||||
|
||||
public Job(DataBaseType dataBaseType) {
|
||||
this.dataBaseType = dataBaseType;
|
||||
OriginalConfPretreatmentUtil.DATABASE_TYPE = this.dataBaseType;
|
||||
}
|
||||
|
||||
public void init(Configuration originalConfig) {
|
||||
OriginalConfPretreatmentUtil.doPretreatment(originalConfig, this.dataBaseType);
|
||||
checkConf(originalConfig);
|
||||
LOG.debug("After job init(), originalConfig now is:[\n{}\n]",
|
||||
originalConfig.toJSON());
|
||||
}
|
||||
|
||||
private void checkConf(Configuration originalConfig) {
|
||||
getWriteMode(originalConfig.getString(Key.WRITE_MODE, "REPLACE"));
|
||||
List<String> userConfiguredColumns = originalConfig.getList(Key.COLUMN, String.class);
|
||||
List<JSONObject> conns = originalConfig.getList(Constant.CONN_MARK,
|
||||
JSONObject.class);
|
||||
if (conns.size() > 1) {
|
||||
throw DataXException.asDataXException(DBUtilErrorCode.ILLEGAL_VALUE, "只支持单表同步");
|
||||
}
|
||||
int tableNumber = originalConfig.getInt(Constant.TABLE_NUMBER_MARK);
|
||||
if (tableNumber > 1) {
|
||||
throw DataXException.asDataXException(DBUtilErrorCode.ILLEGAL_VALUE, "只支持单表同步");
|
||||
}
|
||||
JSONObject connConf = conns.get(0);
|
||||
String jdbcUrl = connConf.getString(Key.JDBC_URL);
|
||||
String username = originalConfig.getString(Key.USERNAME);
|
||||
String password = originalConfig.getString(Key.PASSWORD);
|
||||
|
||||
String table = connConf.getJSONArray(Key.TABLE).getString(0);
|
||||
|
||||
Map<String, Object> clientConf = originalConfig.getMap("client");
|
||||
|
||||
HoloConfig config = new HoloConfig();
|
||||
config.setJdbcUrl(jdbcUrl);
|
||||
config.setUsername(username);
|
||||
config.setPassword(password);
|
||||
if (clientConf != null) {
|
||||
try {
|
||||
config = ConfLoader.load(clientConf, config, ignoreConfList);
|
||||
} catch (Exception e) {
|
||||
throw DataXException.asDataXException(DBUtilErrorCode.CONF_ERROR, "配置解析失败.", e);
|
||||
}
|
||||
}
|
||||
|
||||
try (HoloClient client = new HoloClient(config)) {
|
||||
TableSchema schema = client.getTableSchema(table);
|
||||
LOG.info("table {} column info:", schema.getTableNameObj().getFullName());
|
||||
for (com.alibaba.hologres.client.model.Column column : schema.getColumnSchema()) {
|
||||
LOG.info("name:{},type:{},typeName:{},nullable:{},defaultValue:{}", column.getName(), column.getType(), column.getTypeName(), column.getAllowNull(), column.getDefaultValue());
|
||||
}
|
||||
for (String userColumn : userConfiguredColumns) {
|
||||
if (schema.getColumnIndex(userColumn) == null) {
|
||||
throw DataXException.asDataXException(DBUtilErrorCode.CONF_ERROR, "配置的列 " + userColumn + " 不存在");
|
||||
}
|
||||
}
|
||||
} catch (Exception e) {
|
||||
throw DataXException.asDataXException(DBUtilErrorCode.CONN_DB_ERROR, "获取表schema失败", e);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
// 一般来说,是需要推迟到 task 中进行pre 的执行(单表情况例外)
|
||||
public void prepare(Configuration originalConfig) {
|
||||
|
||||
try {
|
||||
String username = originalConfig.getString(Key.USERNAME);
|
||||
String password = originalConfig.getString(Key.PASSWORD);
|
||||
|
||||
List<Object> conns = originalConfig.getList(Constant.CONN_MARK,
|
||||
Object.class);
|
||||
Configuration connConf = Configuration.from(conns.get(0)
|
||||
.toString());
|
||||
|
||||
String jdbcUrl = connConf.getString(Key.JDBC_URL);
|
||||
originalConfig.set(Key.JDBC_URL, jdbcUrl);
|
||||
|
||||
String table = connConf.getList(Key.TABLE, String.class).get(0);
|
||||
originalConfig.set(Key.TABLE, table);
|
||||
|
||||
List<String> preSqls = originalConfig.getList(Key.PRE_SQL,
|
||||
String.class);
|
||||
List<String> renderedPreSqls = WriterUtil.renderPreOrPostSqls(
|
||||
preSqls, table);
|
||||
|
||||
originalConfig.remove(Constant.CONN_MARK);
|
||||
if (null != renderedPreSqls && !renderedPreSqls.isEmpty()) {
|
||||
// 说明有 preSql 配置,则此处删除掉
|
||||
originalConfig.remove(Key.PRE_SQL);
|
||||
String tempJdbcUrl = jdbcUrl.replace("postgresql", "hologres");
|
||||
try (Connection conn = DriverManager.getConnection(
|
||||
tempJdbcUrl, username, password)) {
|
||||
LOG.info("Begin to execute preSqls:[{}]. context info:{}.",
|
||||
StringUtils.join(renderedPreSqls, ";"), tempJdbcUrl);
|
||||
|
||||
WriterUtil.executeSqls(conn, renderedPreSqls, tempJdbcUrl, dataBaseType);
|
||||
}
|
||||
}
|
||||
LOG.debug("After job prepare(), originalConfig now is:[\n{}\n]",
|
||||
originalConfig.toJSON());
|
||||
} catch (SQLException e) {
|
||||
throw DataXException.asDataXException(DBUtilErrorCode.SQL_EXECUTE_FAIL, e);
|
||||
}
|
||||
}
|
||||
|
||||
public List<Configuration> split(Configuration originalConfig,
|
||||
int mandatoryNumber) {
|
||||
return WriterUtil.doSplit(originalConfig, mandatoryNumber);
|
||||
}
|
||||
|
||||
// 一般来说,是需要推迟到 task 中进行post 的执行(单表情况例外)
|
||||
public void post(Configuration originalConfig) {
|
||||
|
||||
String username = originalConfig.getString(Key.USERNAME);
|
||||
String password = originalConfig.getString(Key.PASSWORD);
|
||||
|
||||
String jdbcUrl = originalConfig.getString(Key.JDBC_URL);
|
||||
|
||||
String table = originalConfig.getString(Key.TABLE);
|
||||
|
||||
List<String> postSqls = originalConfig.getList(Key.POST_SQL,
|
||||
String.class);
|
||||
List<String> renderedPostSqls = WriterUtil.renderPreOrPostSqls(
|
||||
postSqls, table);
|
||||
|
||||
if (null != renderedPostSqls && !renderedPostSqls.isEmpty()) {
|
||||
// 说明有 postSql 配置,则此处删除掉
|
||||
originalConfig.remove(Key.POST_SQL);
|
||||
String tempJdbcUrl = jdbcUrl.replace("postgresql", "hologres");
|
||||
Connection conn = DBUtil.getConnection(this.dataBaseType,
|
||||
tempJdbcUrl, username, password);
|
||||
|
||||
LOG.info(
|
||||
"Begin to execute postSqls:[{}]. context info:{}.",
|
||||
StringUtils.join(renderedPostSqls, ";"), tempJdbcUrl);
|
||||
WriterUtil.executeSqls(conn, renderedPostSqls, tempJdbcUrl, dataBaseType);
|
||||
DBUtil.closeDBResources(null, null, conn);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
public void destroy(Configuration originalConfig) {
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
public static class Task {
|
||||
protected static final Logger LOG = LoggerFactory
|
||||
.getLogger(BaseWriter.Task.class);
|
||||
|
||||
protected DataBaseType dataBaseType;
|
||||
|
||||
protected String username;
|
||||
protected String password;
|
||||
protected String jdbcUrl;
|
||||
protected String table;
|
||||
protected List<String> columns;
|
||||
protected int batchSize;
|
||||
protected int batchByteSize;
|
||||
protected int columnNumber = 0;
|
||||
protected TaskPluginCollector taskPluginCollector;
|
||||
|
||||
// 作为日志显示信息时,需要附带的通用信息。比如信息所对应的数据库连接等信息,针对哪个表做的操作
|
||||
protected static String BASIC_MESSAGE;
|
||||
|
||||
protected WriteMode writeMode;
|
||||
protected String arrayDelimiter;
|
||||
protected boolean emptyAsNull;
|
||||
|
||||
protected HoloConfig config;
|
||||
|
||||
public Task(DataBaseType dataBaseType) {
|
||||
this.dataBaseType = dataBaseType;
|
||||
}
|
||||
|
||||
public void init(Configuration writerSliceConfig) {
|
||||
this.username = writerSliceConfig.getString(Key.USERNAME);
|
||||
this.password = writerSliceConfig.getString(Key.PASSWORD);
|
||||
this.jdbcUrl = writerSliceConfig.getString(Key.JDBC_URL);
|
||||
this.table = writerSliceConfig.getString(Key.TABLE);
|
||||
|
||||
this.columns = writerSliceConfig.getList(Key.COLUMN, String.class);
|
||||
this.columnNumber = this.columns.size();
|
||||
|
||||
this.arrayDelimiter = writerSliceConfig.getString(Key.ARRAY_DELIMITER);
|
||||
|
||||
this.batchSize = writerSliceConfig.getInt(Key.BATCH_SIZE, Constant.DEFAULT_BATCH_SIZE);
|
||||
this.batchByteSize = writerSliceConfig.getInt(Key.BATCH_BYTE_SIZE, Constant.DEFAULT_BATCH_BYTE_SIZE);
|
||||
|
||||
writeMode = getWriteMode(writerSliceConfig.getString(Key.WRITE_MODE, "REPLACE"));
|
||||
emptyAsNull = writerSliceConfig.getBool(Key.EMPTY_AS_NULL, true);
|
||||
|
||||
Map<String, Object> clientConf = writerSliceConfig.getMap("client");
|
||||
|
||||
config = new HoloConfig();
|
||||
config.setJdbcUrl(this.jdbcUrl);
|
||||
config.setUsername(username);
|
||||
config.setPassword(password);
|
||||
config.setWriteMode(writeMode == WriteMode.IGNORE ? com.alibaba.hologres.client.model.WriteMode.INSERT_OR_IGNORE : (writeMode == WriteMode.UPDATE ? com.alibaba.hologres.client.model.WriteMode.INSERT_OR_UPDATE : com.alibaba.hologres.client.model.WriteMode.INSERT_OR_REPLACE));
|
||||
config.setWriteBatchSize(this.batchSize);
|
||||
config.setWriteBatchTotalByteSize(this.batchByteSize);
|
||||
config.setMetaCacheTTL(3600000L);
|
||||
config.setEnableDefaultForNotNullColumn(false);
|
||||
config.setRetryCount(5);
|
||||
config.setAppName("datax");
|
||||
|
||||
if (clientConf != null) {
|
||||
try {
|
||||
config = ConfLoader.load(clientConf, config, ignoreConfList);
|
||||
} catch (Exception e) {
|
||||
throw DataXException.asDataXException(DBUtilErrorCode.CONF_ERROR, "配置解析失败.", e);
|
||||
}
|
||||
}
|
||||
|
||||
BASIC_MESSAGE = String.format("jdbcUrl:[%s], table:[%s]",
|
||||
this.jdbcUrl, this.table);
|
||||
}
|
||||
|
||||
public void prepare(Configuration writerSliceConfig) {
|
||||
|
||||
}
|
||||
|
||||
public void startWriteWithConnection(RecordReceiver recordReceiver, TaskPluginCollector taskPluginCollector) {
|
||||
this.taskPluginCollector = taskPluginCollector;
|
||||
|
||||
try (HoloClient client = new HoloClient(config)) {
|
||||
Record record;
|
||||
TableSchema schema = RetryUtil.executeWithRetry(() -> client.getTableSchema(this.table), 3, 5000L, true);
|
||||
while ((record = recordReceiver.getFromReader()) != null) {
|
||||
if (record.getColumnNumber() != this.columnNumber) {
|
||||
// 源头读取字段列数与目的表字段写入列数不相等,直接报错
|
||||
throw DataXException
|
||||
.asDataXException(
|
||||
DBUtilErrorCode.CONF_ERROR,
|
||||
String.format(
|
||||
"列配置信息有错误. 因为您配置的任务中,源头读取字段数:%s 与 目的表要写入的字段数:%s 不相等. 请检查您的配置并作出修改.",
|
||||
record.getColumnNumber(),
|
||||
this.columnNumber));
|
||||
}
|
||||
Put put = convertToPut(record, schema);
|
||||
if (null != put) {
|
||||
try {
|
||||
client.put(put);
|
||||
} catch (HoloClientWithDetailsException detail) {
|
||||
handleDirtyData(detail);
|
||||
}
|
||||
}
|
||||
}
|
||||
try {
|
||||
client.flush();
|
||||
} catch (HoloClientWithDetailsException detail) {
|
||||
handleDirtyData(detail);
|
||||
}
|
||||
} catch (Exception e) {
|
||||
throw DataXException.asDataXException(
|
||||
DBUtilErrorCode.WRITE_DATA_ERROR, e);
|
||||
}
|
||||
}
|
||||
|
||||
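// Hand each failed record reported by holo-client over to the TaskPluginCollector,
// so it is recorded as dirty data instead of failing the whole task.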
private void handleDirtyData(HoloClientWithDetailsException detail) {
|
||||
for (int i = 0; i < detail.size(); ++i) {
|
||||
com.alibaba.hologres.client.model.Record failRecord = detail.getFailRecord(i);
|
||||
if (failRecord.getAttachmentList() != null) {
|
||||
for (Object obj : failRecord.getAttachmentList()) {
|
||||
taskPluginCollector.collectDirtyRecord((Record) obj, detail.getException(i));
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
public void startWrite(RecordReceiver recordReceiver,
|
||||
TaskPluginCollector taskPluginCollector) {
|
||||
startWriteWithConnection(recordReceiver, taskPluginCollector);
|
||||
}
|
||||
|
||||
public void post(Configuration writerSliceConfig) {
|
||||
|
||||
}
|
||||
|
||||
public void destroy(Configuration writerSliceConfig) {
|
||||
}
|
||||
|
||||
// 直接使用了两个类变量:columnNumber,columns
|
||||
protected Put convertToPut(Record record, TableSchema schema) {
|
||||
try {
|
||||
Put put = new Put(schema);
|
||||
put.getRecord().addAttachment(record);
|
||||
for (int i = 0; i < this.columnNumber; i++) {
|
||||
fillColumn(put, schema, schema.getColumnIndex(this.columns.get(i)), record.getColumn(i));
|
||||
}
|
||||
return put;
|
||||
} catch (Exception e) {
|
||||
taskPluginCollector.collectDirtyRecord(record, e);
|
||||
return null;
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
protected void fillColumn(Put data, TableSchema schema, int index, Column column) throws SQLException {
|
||||
com.alibaba.hologres.client.model.Column holoColumn = schema.getColumn(index);
|
||||
switch (holoColumn.getType()) {
|
||||
case Types.CHAR:
|
||||
case Types.NCHAR:
|
||||
case Types.CLOB:
|
||||
case Types.NCLOB:
|
||||
case Types.VARCHAR:
|
||||
case Types.LONGVARCHAR:
|
||||
case Types.NVARCHAR:
|
||||
case Types.LONGNVARCHAR:
|
||||
String value = column.asString();
|
||||
if (emptyAsNull && value != null && value.length() == 0) {
|
||||
data.setObject(index, null);
|
||||
} else {
|
||||
data.setObject(index, value);
|
||||
}
|
||||
break;
|
||||
|
||||
case Types.SMALLINT:
|
||||
if (column.getByteSize() > 0) {
|
||||
data.setObject(index, column.asBigInteger().shortValue());
|
||||
} else if (emptyAsNull) {
|
||||
data.setObject(index, null);
|
||||
}
|
||||
break;
|
||||
case Types.INTEGER:
|
||||
if (column.getByteSize() > 0) {
|
||||
data.setObject(index, column.asBigInteger().intValue());
|
||||
} else if (emptyAsNull) {
|
||||
data.setObject(index, null);
|
||||
}
|
||||
break;
|
||||
case Types.BIGINT:
|
||||
if (column.getByteSize() > 0) {
|
||||
data.setObject(index, column.asBigInteger().longValue());
|
||||
} else if (emptyAsNull) {
|
||||
data.setObject(index, null);
|
||||
}
|
||||
break;
|
||||
case Types.NUMERIC:
|
||||
case Types.DECIMAL:
|
||||
if (column.getByteSize() > 0) {
|
||||
data.setObject(index, column.asBigDecimal());
|
||||
} else if (emptyAsNull) {
|
||||
data.setObject(index, null);
|
||||
}
|
||||
break;
|
||||
case Types.FLOAT:
|
||||
case Types.REAL:
|
||||
if (column.getByteSize() > 0) {
|
||||
data.setObject(index, column.asBigDecimal().floatValue());
|
||||
} else if (emptyAsNull) {
|
||||
data.setObject(index, null);
|
||||
}
|
||||
break;
|
||||
case Types.DOUBLE:
|
||||
if (column.getByteSize() > 0) {
|
||||
data.setObject(index, column.asDouble());
|
||||
} else if (emptyAsNull) {
|
||||
data.setObject(index, null);
|
||||
}
|
||||
break;
|
||||
case Types.TIME:
|
||||
if (column.getByteSize() > 0) {
|
||||
if (column instanceof LongColumn || column instanceof DateColumn) {
|
||||
data.setObject(index, new Time(column.asLong()));
|
||||
} else {
|
||||
data.setObject(index, column.asString());
|
||||
}
|
||||
} else if (emptyAsNull) {
|
||||
data.setObject(index, null);
|
||||
}
|
||||
break;
|
||||
case Types.DATE:
|
||||
if (column.getByteSize() > 0) {
|
||||
if (column instanceof LongColumn || column instanceof DateColumn) {
|
||||
data.setObject(index, column.asLong());
|
||||
} else {
|
||||
data.setObject(index, column.asString());
|
||||
}
|
||||
} else if (emptyAsNull) {
|
||||
data.setObject(index, null);
|
||||
}
|
||||
break;
|
||||
case Types.TIMESTAMP:
|
||||
if (column.getByteSize() > 0) {
|
||||
if (column instanceof LongColumn || column instanceof DateColumn) {
|
||||
data.setObject(index, new Timestamp(column.asLong()));
|
||||
} else {
|
||||
data.setObject(index, column.asString());
|
||||
}
|
||||
} else if (emptyAsNull) {
|
||||
data.setObject(index, null);
|
||||
}
|
||||
break;
|
||||
|
||||
case Types.BINARY:
|
||||
case Types.VARBINARY:
|
||||
case Types.BLOB:
|
||||
case Types.LONGVARBINARY:
|
||||
String byteValue = column.asString();
|
||||
if (null != byteValue) {
|
||||
data.setObject(index, column
|
||||
.asBytes());
|
||||
}
|
||||
break;
|
||||
case Types.BOOLEAN:
|
||||
case Types.BIT:
|
||||
if (column.getByteSize() == 0) {
|
||||
break;
|
||||
}
|
||||
try {
|
||||
Boolean boolValue = column.asBoolean();
|
||||
data.setObject(index, boolValue);
|
||||
} catch (Exception e) {
|
||||
data.setObject(index, !"0".equals(column.asString()));
|
||||
}
|
||||
break;
|
||||
case Types.ARRAY:
|
||||
String arrayString = column.asString();
|
||||
Object arrayObject = null;
|
||||
if (null == arrayString || (emptyAsNull && "".equals(arrayString))) {
|
||||
data.setObject(index, null);
|
||||
break;
|
||||
} else if (arrayDelimiter != null && arrayDelimiter.length() > 0) {
|
||||
arrayObject = arrayString.split(this.arrayDelimiter);
|
||||
} else {
|
||||
arrayObject = JSONArray.parseArray(arrayString);
|
||||
}
|
||||
data.setObject(index, arrayObject);
|
||||
break;
|
||||
default:
|
||||
throw DataXException
|
||||
.asDataXException(
|
||||
DBUtilErrorCode.UNSUPPORTED_TYPE,
|
||||
String.format(
|
||||
"您的配置文件中的列配置信息有误. 因为DataX 不支持数据库写入这种字段类型. 字段名:[%s], 字段类型:[%d], 字段Java类型:[%s]. 请修改表中该字段的类型或者不同步该字段.",
|
||||
holoColumn.getName(),
|
||||
holoColumn.getType(),
|
||||
holoColumn.getTypeName()));
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
@ -0,0 +1,15 @@
|
||||
package com.alibaba.datax.plugin.writer.hologresjdbcwriter;
|
||||
|
||||
/**
|
||||
* 用于插件解析用户配置时,需要进行标识(MARK)的常量的声明.
|
||||
*/
|
||||
public final class Constant {
|
||||
public static final int DEFAULT_BATCH_SIZE = 512;
|
||||
|
||||
public static final int DEFAULT_BATCH_BYTE_SIZE = 50 * 1024 * 1024;
|
||||
|
||||
public static String CONN_MARK = "connection";
|
||||
|
||||
public static String TABLE_NUMBER_MARK = "tableNumber";
|
||||
|
||||
}
|
@ -0,0 +1,78 @@
|
||||
package com.alibaba.datax.plugin.writer.hologresjdbcwriter;
|
||||
|
||||
import com.alibaba.datax.common.plugin.RecordReceiver;
|
||||
import com.alibaba.datax.common.spi.Writer;
|
||||
import com.alibaba.datax.common.util.Configuration;
|
||||
import com.alibaba.datax.plugin.rdbms.util.DataBaseType;
|
||||
|
||||
import java.util.List;
|
||||
|
||||
public class HologresJdbcWriter extends Writer {
|
||||
private static final DataBaseType DATABASE_TYPE = DataBaseType.PostgreSQL;
|
||||
|
||||
public static class Job extends Writer.Job {
|
||||
private Configuration originalConfig = null;
|
||||
private BaseWriter.Job baseWriterMaster;
|
||||
|
||||
@Override
|
||||
public void init() {
|
||||
this.originalConfig = super.getPluginJobConf();
|
||||
this.baseWriterMaster = new BaseWriter.Job(DATABASE_TYPE);
|
||||
this.baseWriterMaster.init(this.originalConfig);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void prepare() {
|
||||
this.baseWriterMaster.prepare(this.originalConfig);
|
||||
}
|
||||
|
||||
@Override
|
||||
public List<Configuration> split(int mandatoryNumber) {
|
||||
return this.baseWriterMaster.split(this.originalConfig, mandatoryNumber);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void post() {
|
||||
this.baseWriterMaster.post(this.originalConfig);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void destroy() {
|
||||
this.baseWriterMaster.destroy(this.originalConfig);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
public static class Task extends Writer.Task {
|
||||
private Configuration writerSliceConfig;
|
||||
private BaseWriter.Task baseWriterSlave;
|
||||
|
||||
@Override
|
||||
public void init() {
|
||||
this.writerSliceConfig = super.getPluginJobConf();
|
||||
this.baseWriterSlave = new BaseWriter.Task(DATABASE_TYPE);
|
||||
this.baseWriterSlave.init(this.writerSliceConfig);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void prepare() {
|
||||
this.baseWriterSlave.prepare(this.writerSliceConfig);
|
||||
}
|
||||
|
||||
public void startWrite(RecordReceiver recordReceiver) {
|
||||
this.baseWriterSlave.startWrite(recordReceiver, super.getTaskPluginCollector());
|
||||
}
|
||||
|
||||
@Override
|
||||
public void post() {
|
||||
this.baseWriterSlave.post(this.writerSliceConfig);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void destroy() {
|
||||
this.baseWriterSlave.destroy(this.writerSliceConfig);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
}
|
@ -0,0 +1,31 @@
|
||||
package com.alibaba.datax.plugin.writer.hologresjdbcwriter;
|
||||
|
||||
public final class Key {
|
||||
public final static String JDBC_URL = "jdbcUrl";
|
||||
|
||||
public final static String USERNAME = "username";
|
||||
|
||||
public final static String PASSWORD = "password";
|
||||
|
||||
public final static String TABLE = "table";
|
||||
|
||||
public final static String COLUMN = "column";
|
||||
|
||||
public final static String ARRAY_DELIMITER = "arrayDelimiter";
|
||||
|
||||
public final static String WRITE_MODE = "writeMode";
|
||||
|
||||
public final static String PRE_SQL = "preSql";
|
||||
|
||||
public final static String POST_SQL = "postSql";
|
||||
|
||||
//默认值:512,与 Constant.DEFAULT_BATCH_SIZE 一致
|
||||
public final static String BATCH_SIZE = "batchSize";
|
||||
|
||||
//默认值:50m
|
||||
public final static String BATCH_BYTE_SIZE = "batchByteSize";
|
||||
|
||||
public final static String EMPTY_AS_NULL = "emptyAsNull";
|
||||
|
||||
|
||||
}
|
@ -0,0 +1,59 @@
|
||||
package com.alibaba.datax.plugin.writer.hologresjdbcwriter.util;
|
||||
|
||||
import com.alibaba.hologres.client.model.WriteMode;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.lang.reflect.Field;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
|
||||
public class ConfLoader {
|
||||
public static Logger LOG = LoggerFactory.getLogger(ConfLoader.class);
|
||||
|
||||
static public <T> T load(Map<String, Object> props, T config, Set<String> ignoreList) throws Exception {
|
||||
Field[] fields = config.getClass().getDeclaredFields();
|
||||
for (Map.Entry<String, Object> entry : props.entrySet()) {
|
||||
String key = entry.getKey();
|
||||
String value = entry.getValue().toString();
|
||||
if (ignoreList.contains(key)) {
|
||||
LOG.info("Config Skip {}", key);
|
||||
continue;
|
||||
}
|
||||
boolean match = false;
|
||||
for (Field field : fields) {
|
||||
if (field.getName().equals(key)) {
|
||||
match = true;
|
||||
field.setAccessible(true);
|
||||
Class<?> type = field.getType();
|
||||
if (type.equals(String.class)) {
|
||||
field.set(config, value);
|
||||
} else if (type.equals(int.class)) {
|
||||
field.set(config, Integer.parseInt(value));
|
||||
} else if (type.equals(long.class)) {
|
||||
field.set(config, Long.parseLong(value));
|
||||
} else if (type.equals(boolean.class)) {
|
||||
field.set(config, Boolean.parseBoolean(value));
|
||||
} else if (WriteMode.class.equals(type)) {
|
||||
field.set(config, WriteMode.valueOf(value));
|
||||
} else {
|
||||
throw new Exception("invalid type " + type + " for param " + key);
|
||||
}
|
||||
if ("password".equals(key)) {
|
||||
StringBuilder sb = new StringBuilder();
|
||||
for (int i = 0; i < value.length(); ++i) {
|
||||
sb.append("*");
|
||||
}
|
||||
LOG.info("Config {}={}", key, sb.toString());
|
||||
} else {
|
||||
LOG.info("Config {}={}", key, value);
|
||||
}
|
||||
}
|
||||
}
|
||||
if (!match) {
|
||||
throw new Exception("param " + key + " not found in HoloConfig");
|
||||
}
|
||||
}
|
||||
return config;
|
||||
}
|
||||
}
|
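A hypothetical use of `ConfLoader` above: the optional `client` map from the job JSON is copied onto `HoloConfig` fields by name via reflection, and unknown keys fail fast. The field names `writeThreadSize` and `retryCount` are assumptions based on the setters used in `BaseWriter` and the `client.writeThreadSize` parameter documented above; the snippet is illustrative only, not part of the plugin.

```java
// Hypothetical usage of ConfLoader: copy the job's "client" map onto HoloConfig by field name.
import com.alibaba.datax.plugin.writer.hologresjdbcwriter.util.ConfLoader;
import com.alibaba.hologres.client.HoloConfig;

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ConfLoaderSketch {
    public static void main(String[] args) throws Exception {
        Map<String, Object> clientConf = new HashMap<>();
        clientConf.put("writeThreadSize", 3); // assumed HoloConfig field, see client.writeThreadSize above
        clientConf.put("retryCount", 3);      // assumed field, overrides the default set in BaseWriter

        // Connection-level keys are handled elsewhere, so ConfLoader skips them.
        Set<String> ignore = new HashSet<>(Arrays.asList("jdbcUrl", "username", "password", "writeMode"));

        HoloConfig config = new HoloConfig();
        config = ConfLoader.load(clientConf, config, ignore);
    }
}
```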
@ -0,0 +1,82 @@
|
||||
package com.alibaba.datax.plugin.writer.hologresjdbcwriter.util;
|
||||
|
||||
import com.alibaba.datax.common.exception.DataXException;
|
||||
import com.alibaba.datax.common.util.Configuration;
|
||||
import com.alibaba.datax.plugin.rdbms.util.DBUtilErrorCode;
|
||||
import com.alibaba.datax.plugin.rdbms.util.DataBaseType;
|
||||
import com.alibaba.datax.plugin.rdbms.util.TableExpandUtil;
|
||||
import com.alibaba.datax.plugin.writer.hologresjdbcwriter.Constant;
|
||||
import com.alibaba.datax.plugin.writer.hologresjdbcwriter.Key;
|
||||
import org.apache.commons.lang3.StringUtils;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.util.List;
|
||||
|
||||
public final class OriginalConfPretreatmentUtil {
|
||||
private static final Logger LOG = LoggerFactory
|
||||
.getLogger(OriginalConfPretreatmentUtil.class);
|
||||
|
||||
public static DataBaseType DATABASE_TYPE;
|
||||
|
||||
public static void doPretreatment(Configuration originalConfig, DataBaseType dataBaseType) {
|
||||
// 检查 username/password 配置(必填)
|
||||
originalConfig.getNecessaryValue(Key.USERNAME, DBUtilErrorCode.REQUIRED_VALUE);
|
||||
originalConfig.getNecessaryValue(Key.PASSWORD, DBUtilErrorCode.REQUIRED_VALUE);
|
||||
|
||||
doCheckBatchSize(originalConfig);
|
||||
simplifyConf(originalConfig);
|
||||
}
|
||||
|
||||
public static void doCheckBatchSize(Configuration originalConfig) {
|
||||
// 检查batchSize 配置(选填,如果未填写,则设置为默认值)
|
||||
int batchSize = originalConfig.getInt(Key.BATCH_SIZE, Constant.DEFAULT_BATCH_SIZE);
|
||||
if (batchSize < 1) {
|
||||
throw DataXException.asDataXException(DBUtilErrorCode.ILLEGAL_VALUE, String.format(
|
||||
"您的batchSize配置有误. 您所配置的写入数据库表的 batchSize:%s 不能小于1. 推荐配置范围为:[256-1024] (保持128的倍数), 该值越大, 内存溢出可能性越大. 请检查您的配置并作出修改.",
|
||||
batchSize));
|
||||
}
|
||||
|
||||
originalConfig.set(Key.BATCH_SIZE, batchSize);
|
||||
}
|
||||
|
||||
public static void simplifyConf(Configuration originalConfig) {
|
||||
List<Object> connections = originalConfig.getList(Constant.CONN_MARK,
|
||||
Object.class);
|
||||
|
||||
int tableNum = 0;
|
||||
|
||||
for (int i = 0, len = connections.size(); i < len; i++) {
|
||||
Configuration connConf = Configuration.from(connections.get(i).toString());
|
||||
|
||||
String jdbcUrl = connConf.getString(Key.JDBC_URL);
|
||||
if (StringUtils.isBlank(jdbcUrl)) {
|
||||
throw DataXException.asDataXException(DBUtilErrorCode.REQUIRED_VALUE, "您未配置写入数据库表的 jdbcUrl. 请检查您的配置并作出修改.");
|
||||
}
|
||||
|
||||
List<String> tables = connConf.getList(Key.TABLE, String.class);
|
||||
|
||||
if (null == tables || tables.isEmpty()) {
|
||||
throw DataXException.asDataXException(DBUtilErrorCode.REQUIRED_VALUE,
|
||||
"您未配置写入数据库表的表名称. 根据配置DataX找不到您配置的表. 请检查您的配置并作出修改.");
|
||||
}
|
||||
|
||||
// 对每一个connection 上配置的table 项进行解析
|
||||
List<String> expandedTables = TableExpandUtil
|
||||
.expandTableConf(DATABASE_TYPE, tables);
|
||||
|
||||
if (null == expandedTables || expandedTables.isEmpty()) {
|
||||
throw DataXException.asDataXException(DBUtilErrorCode.CONF_ERROR,
|
||||
"您配置的写入数据库表名称错误. DataX找不到您配置的表,请检查您的配置并作出修改.");
|
||||
}
|
||||
|
||||
tableNum += expandedTables.size();
|
||||
|
||||
originalConfig.set(String.format("%s[%d].%s", Constant.CONN_MARK,
|
||||
i, Key.TABLE), expandedTables);
|
||||
}
|
||||
|
||||
originalConfig.set(Constant.TABLE_NUMBER_MARK, tableNum);
|
||||
}
|
||||
|
||||
}
|
@ -0,0 +1,111 @@
|
||||
package com.alibaba.datax.plugin.writer.hologresjdbcwriter.util;
|
||||
|
||||
import com.alibaba.datax.common.exception.DataXException;
|
||||
import com.alibaba.datax.common.util.Configuration;
|
||||
import com.alibaba.datax.plugin.rdbms.util.DBUtil;
|
||||
import com.alibaba.datax.plugin.rdbms.util.DBUtilErrorCode;
|
||||
import com.alibaba.datax.plugin.rdbms.util.DataBaseType;
|
||||
import com.alibaba.datax.plugin.rdbms.util.RdbmsException;
|
||||
import com.alibaba.datax.plugin.rdbms.writer.Constant;
|
||||
import com.alibaba.datax.plugin.rdbms.writer.Key;
|
||||
import org.apache.commons.lang3.StringUtils;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.sql.Connection;
|
||||
import java.sql.Statement;
|
||||
import java.util.ArrayList;
|
||||
import java.util.Collections;
|
||||
import java.util.List;
|
||||
|
||||
public final class WriterUtil {
|
||||
private static final Logger LOG = LoggerFactory.getLogger(WriterUtil.class);
|
||||
|
||||
//TODO 切分报错
|
||||
public static List<Configuration> doSplit(Configuration simplifiedConf,
|
||||
int adviceNumber) {
|
||||
|
||||
List<Configuration> splitResultConfigs = new ArrayList<Configuration>();
|
||||
|
||||
int tableNumber = simplifiedConf.getInt(Constant.TABLE_NUMBER_MARK);
|
||||
|
||||
//处理单表的情况
|
||||
if (tableNumber == 1) {
|
||||
//由于在之前的 master prepare 中已经把 table,jdbcUrl 提取出来,所以这里处理十分简单
|
||||
for (int j = 0; j < adviceNumber; j++) {
|
||||
splitResultConfigs.add(simplifiedConf.clone());
|
||||
}
|
||||
|
||||
return splitResultConfigs;
|
||||
}
|
||||
|
||||
if (tableNumber != adviceNumber) {
|
||||
throw DataXException.asDataXException(DBUtilErrorCode.CONF_ERROR,
|
||||
String.format("您的配置文件中的列配置信息有误. 您要写入的目的端的表个数是:%s , 但是根据系统建议需要切分的份数是:%s. 请检查您的配置并作出修改.",
|
||||
tableNumber, adviceNumber));
|
||||
}
|
||||
|
||||
String jdbcUrl;
|
||||
List<String> preSqls = simplifiedConf.getList(Key.PRE_SQL, String.class);
|
||||
List<String> postSqls = simplifiedConf.getList(Key.POST_SQL, String.class);
|
||||
|
||||
List<Object> conns = simplifiedConf.getList(Constant.CONN_MARK,
|
||||
Object.class);
|
||||
|
||||
for (Object conn : conns) {
|
||||
Configuration sliceConfig = simplifiedConf.clone();
|
||||
|
||||
Configuration connConf = Configuration.from(conn.toString());
|
||||
jdbcUrl = connConf.getString(Key.JDBC_URL);
|
||||
sliceConfig.set(Key.JDBC_URL, jdbcUrl);
|
||||
|
||||
sliceConfig.remove(Constant.CONN_MARK);
|
||||
|
||||
List<String> tables = connConf.getList(Key.TABLE, String.class);
|
||||
|
||||
for (String table : tables) {
|
||||
Configuration tempSlice = sliceConfig.clone();
|
||||
tempSlice.set(Key.TABLE, table);
|
||||
tempSlice.set(Key.PRE_SQL, renderPreOrPostSqls(preSqls, table));
|
||||
tempSlice.set(Key.POST_SQL, renderPreOrPostSqls(postSqls, table));
|
||||
|
||||
splitResultConfigs.add(tempSlice);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
return splitResultConfigs;
|
||||
}
|
||||
|
||||
public static List<String> renderPreOrPostSqls(List<String> preOrPostSqls, String tableName) {
|
||||
if (null == preOrPostSqls) {
|
||||
return Collections.emptyList();
|
||||
}
|
||||
|
||||
List<String> renderedSqls = new ArrayList<String>();
|
||||
for (String sql : preOrPostSqls) {
|
||||
//preSql为空时,不加入执行队列
|
||||
if (StringUtils.isNotBlank(sql)) {
|
||||
renderedSqls.add(sql.replace(Constant.TABLE_NAME_PLACEHOLDER, tableName));
|
||||
}
|
||||
}
|
||||
|
||||
return renderedSqls;
|
||||
}
|
||||
|
||||
public static void executeSqls(Connection conn, List<String> sqls, String basicMessage,DataBaseType dataBaseType) {
|
||||
Statement stmt = null;
|
||||
String currentSql = null;
|
||||
try {
|
||||
stmt = conn.createStatement();
|
||||
for (String sql : sqls) {
|
||||
currentSql = sql;
|
||||
DBUtil.executeSqlWithoutResultSet(stmt, sql);
|
||||
}
|
||||
} catch (Exception e) {
|
||||
throw RdbmsException.asQueryException(dataBaseType,e,currentSql,null,null);
|
||||
} finally {
|
||||
DBUtil.closeDBResources(null, stmt, null);
|
||||
}
|
||||
}
|
||||
}
|
6
hologresjdbcwriter/src/main/resources/plugin.json
Normal file
@ -0,0 +1,6 @@
|
||||
{
|
||||
"name": "hologreswriter",
|
||||
"class": "com.alibaba.datax.plugin.writer.hologreswriter.HologresWriter",
|
||||
"description": "",
|
||||
"developer": "alibaba"
|
||||
}
|
@ -0,0 +1,11 @@
|
||||
{
|
||||
"name": "hologreswriter",
|
||||
"parameter": {
|
||||
"url": "",
|
||||
"username": "",
|
||||
"password": "",
|
||||
"database": "",
|
||||
"table": "",
|
||||
"partition": ""
|
||||
}
|
||||
}
|
BIN
images/datax.logo.png
Normal file
Binary file not shown. (102 KiB)
@ -1,4 +1,4 @@
|
||||
Copyright 1999-2017 Alibaba Group Holding Ltd.
|
||||
Copyright 1999-2022 Alibaba Group Holding Ltd.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
|
@ -116,10 +116,10 @@ MongoDBWriter通过Datax框架获取Reader生成的数据,然后将Datax支持
|
||||
"type": "int"
|
||||
}
|
||||
],
|
||||
"upsertInfo": {
|
||||
"isUpsert": "true",
|
||||
"upsertKey": "unique_id"
|
||||
}
|
||||
"writeMode": {
|
||||
"isReplace": "true",
|
||||
"replaceKey": "unique_id"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -135,11 +135,11 @@ MongoDBWriter通过Datax框架获取Reader生成的数据,然后将Datax支持
|
||||
* collectionName: MonogoDB的集合名。【必填】
|
||||
* column:MongoDB的文档列名。【必填】
|
||||
* name:Column的名字。【必填】
|
||||
* type:Column的类型。【选填】
|
||||
* type:Column的类型。【必填】
|
||||
* splitter:特殊分隔符,当且仅当要处理的字符串要用分隔符分隔为字符数组时,才使用这个参数,通过这个参数指定的分隔符,将字符串分隔存储到MongoDB的数组中。【选填】
|
||||
* upsertInfo:指定了传输数据时更新的信息。【选填】
|
||||
* isUpsert:当设置为true时,表示针对相同的upsertKey做更新操作。【选填】
|
||||
* upsertKey:upsertKey指定了每行记录的业务主键。用来做更新时使用。【选填】
|
||||
* writeMode:指定了传输数据时更新的信息。【选填】
|
||||
* isReplace:当设置为true时,表示针对相同的replaceKey做更新操作。【选填】
|
||||
* replaceKey:replaceKey指定了每行记录的业务主键。用来做更新时使用。【选填】
|
||||
|
||||
#### 5 类型转换
|
||||
|
||||
|
@ -197,9 +197,9 @@ MysqlReader插件实现了从Mysql读取数据。在底层实现上,MysqlReade
|
||||
|
||||
* **querySql**
|
||||
|
||||
* 描述:在有些业务场景下,where这一配置项不足以描述所筛选的条件,用户可以通过该配置型来自定义筛选SQL。当用户配置了这一项之后,DataX系统就会忽略table,column这些配置型,直接使用这个配置项的内容对数据进行筛选,例如需要进行多表join后同步数据,使用select a,b from table_a join table_b on table_a.id = table_b.id <br />
|
||||
* 描述:在有些业务场景下,where这一配置项不足以描述所筛选的条件,用户可以通过该配置型来自定义筛选SQL。当用户配置了这一项之后,DataX系统就会忽略column这些配置型,直接使用这个配置项的内容对数据进行筛选,例如需要进行多表join后同步数据,使用select a,b from table_a join table_b on table_a.id = table_b.id <br />
|
||||
|
||||
`当用户配置querySql时,MysqlReader直接忽略table、column、where条件的配置`,querySql优先级大于table、column、where选项。
|
||||
`当用户配置querySql时,MysqlReader直接忽略column、where条件的配置`,querySql优先级大于column、where选项。querySql和table不能同时存在
|
||||
|
||||
* 必选:否 <br />
|
||||
|
||||
|
@ -3,6 +3,7 @@ package com.alibaba.datax.plugin.reader.oceanbasev10reader;
|
||||
import java.sql.Connection;
|
||||
import java.util.List;
|
||||
|
||||
import com.alibaba.datax.plugin.reader.oceanbasev10reader.ext.ObReaderKey;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
@ -52,6 +53,21 @@ public class OceanBaseReader extends Reader {
|
||||
|
||||
@Override
|
||||
public List<Configuration> split(int adviceNumber) {
|
||||
String splitPk = originalConfig.getString(Key.SPLIT_PK);
|
||||
List<String> quotedColumns = originalConfig.getList(Key.COLUMN_LIST, String.class);
|
||||
if (splitPk != null && splitPk.length() > 0 && quotedColumns != null) {
|
||||
String escapeChar = ObReaderUtils.isOracleMode(originalConfig.getString(ObReaderKey.OB_COMPATIBILITY_MODE))
|
||||
? "\"" : "`";
|
||||
if (!splitPk.startsWith(escapeChar) && !splitPk.endsWith(escapeChar)) {
|
||||
splitPk = escapeChar + splitPk + escapeChar;
|
||||
}
|
||||
for (String column : quotedColumns) {
|
||||
if (column.equals(splitPk)) {
|
||||
LOG.info("splitPk is an ob reserved keyword, set to {}", splitPk);
|
||||
originalConfig.set(Key.SPLIT_PK, splitPk);
|
||||
}
|
||||
}
|
||||
}
|
||||
return this.readerJob.split(this.originalConfig, adviceNumber);
|
||||
}
|
||||
|
||||
@ -86,6 +102,7 @@ public class OceanBaseReader extends Reader {
|
||||
String obJdbcUrl = jdbcUrl.replace("jdbc:mysql:", "jdbc:oceanbase:");
|
||||
Connection conn = DBUtil.getConnection(DataBaseType.OceanBase, obJdbcUrl, username, password);
|
||||
String compatibleMode = ObReaderUtils.getCompatibleMode(conn);
|
||||
config.set(ObReaderKey.OB_COMPATIBILITY_MODE, compatibleMode);
|
||||
if (ObReaderUtils.isOracleMode(compatibleMode)) {
|
||||
ObReaderUtils.compatibleMode = ObReaderUtils.OB_COMPATIBLE_MODE_ORACLE;
|
||||
}
|
||||
|
@ -0,0 +1,11 @@
|
||||
package com.alibaba.datax.plugin.reader.oceanbasev10reader.ext;
|
||||
|
||||
/**
|
||||
* @author johnrobbet
|
||||
*/
|
||||
public class Constant {
|
||||
|
||||
public static String WEAK_READ_QUERY_SQL_TEMPLATE_WITHOUT_WHERE = "select /*+read_consistency(weak)*/ %s from %s ";
|
||||
|
||||
public static String WEAK_READ_QUERY_SQL_TEMPLATE = "select /*+read_consistency(weak)*/ %s from %s where (%s)";
|
||||
}
|
@ -0,0 +1,16 @@
|
||||
package com.alibaba.datax.plugin.reader.oceanbasev10reader.ext;
|
||||
|
||||
/**
|
||||
* @author johnrobbet
|
||||
*/
|
||||
public class ObReaderKey {
|
||||
|
||||
public final static String READ_BY_PARTITION = "readByPartition";
|
||||
|
||||
public final static String PARTITION_NAME = "partitionName";
|
||||
|
||||
public final static String PARTITION_TYPE = "partitionType";
|
||||
|
||||
public final static String OB_COMPATIBILITY_MODE = "obCompatibilityMode";
|
||||
|
||||
}
|
@ -1,15 +1,16 @@
|
||||
package com.alibaba.datax.plugin.reader.oceanbasev10reader.ext;
|
||||
|
||||
import java.util.Arrays;
|
||||
import java.util.List;
|
||||
|
||||
import com.alibaba.datax.common.constant.CommonConstant;
|
||||
import com.alibaba.datax.common.util.Configuration;
|
||||
import com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader;
|
||||
import com.alibaba.datax.plugin.rdbms.reader.Key;
|
||||
import com.alibaba.datax.plugin.rdbms.util.DataBaseType;
|
||||
import com.alibaba.datax.plugin.rdbms.writer.Constant;
|
||||
import com.alibaba.datax.plugin.rdbms.reader.Constant;
|
||||
import com.alibaba.datax.plugin.reader.oceanbasev10reader.OceanBaseReader;
|
||||
import com.alibaba.datax.plugin.reader.oceanbasev10reader.util.ObReaderUtils;
|
||||
import com.alibaba.datax.plugin.reader.oceanbasev10reader.util.PartitionSplitUtil;
|
||||
import com.alibaba.fastjson.JSONObject;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
@ -29,37 +30,62 @@ public class ReaderJob extends CommonRdbmsReader.Job {
|
||||
ObReaderUtils.escapeDatabaseKeywords(columns);
|
||||
originalConfig.set(Key.COLUMN, columns);
|
||||
|
||||
List<JSONObject> conns = originalConfig.getList(com.alibaba.datax.plugin.rdbms.reader.Constant.CONN_MARK, JSONObject.class);
|
||||
List<JSONObject> conns = originalConfig.getList(Constant.CONN_MARK, JSONObject.class);
|
||||
for (int i = 0; i < conns.size(); i++) {
|
||||
JSONObject conn = conns.get(i);
|
||||
Configuration connConfig = Configuration.from(conn.toString());
|
||||
List<String> tables = connConfig.getList(Key.TABLE, String.class);
|
||||
ObReaderUtils.escapeDatabaseKeywords(tables);
|
||||
originalConfig.set(String.format("%s[%d].%s", com.alibaba.datax.plugin.rdbms.reader.Constant.CONN_MARK, i, Key.TABLE), tables);
|
||||
|
||||
// tables will be null when querySql is configured
|
||||
if (tables != null) {
|
||||
ObReaderUtils.escapeDatabaseKeywords(tables);
|
||||
originalConfig.set(String.format("%s[%d].%s", Constant.CONN_MARK, i, Key.TABLE),
|
||||
tables);
|
||||
}
|
||||
}
|
||||
super.init(originalConfig);
|
||||
}
|
||||
|
||||
@Override
|
||||
public List<Configuration> split(Configuration originalConfig, int adviceNumber) {
|
||||
List<Configuration> list = super.split(originalConfig, adviceNumber);
|
||||
List<Configuration> list;
|
||||
// readByPartition is lower priority than splitPk.
|
||||
// and readByPartition only works in table mode.
|
||||
if (!isSplitPkValid(originalConfig) &&
|
||||
originalConfig.getBool(Constant.IS_TABLE_MODE) &&
|
||||
originalConfig.getBool(ObReaderKey.READ_BY_PARTITION, false)) {
|
||||
LOG.info("try to split reader job by partition.");
|
||||
list = PartitionSplitUtil.splitByPartition(originalConfig);
|
||||
} else {
|
||||
LOG.info("try to split reader job by splitPk.");
|
||||
list = super.split(originalConfig, adviceNumber);
|
||||
}
|
||||
|
||||
for (Configuration config : list) {
|
||||
String jdbcUrl = config.getString(Key.JDBC_URL);
|
||||
String obRegionName = getObRegionName(jdbcUrl);
|
||||
config.set(CommonConstant.LOAD_BALANCE_RESOURCE_MARK, obRegionName);
|
||||
}
|
||||
|
||||
return list;
|
||||
}
|
||||
|
||||
private boolean isSplitPkValid(Configuration originalConfig) {
|
||||
String splitPk = originalConfig.getString(Key.SPLIT_PK);
|
||||
return splitPk != null && splitPk.trim().length() > 0;
|
||||
}
|
||||
|
||||
private String getObRegionName(String jdbcUrl) {
|
||||
if (jdbcUrl.startsWith(Constant.OB10_SPLIT_STRING)) {
|
||||
String[] ss = jdbcUrl.split(Constant.OB10_SPLIT_STRING_PATTERN);
|
||||
final String obJdbcDelimiter = com.alibaba.datax.plugin.rdbms.writer.Constant.OB10_SPLIT_STRING;
|
||||
if (jdbcUrl.startsWith(obJdbcDelimiter)) {
|
||||
String[] ss = jdbcUrl.split(obJdbcDelimiter);
|
||||
if (ss.length >= 2) {
|
||||
String tenant = ss[1].trim();
|
||||
String[] sss = tenant.split(":");
|
||||
return sss[0];
|
||||
}
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
File diff suppressed because one or more lines are too long
@ -0,0 +1,35 @@
|
||||
package com.alibaba.datax.plugin.reader.oceanbasev10reader.util;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.List;
|
||||
|
||||
/**
|
||||
* @author johnrobbet
|
||||
*/
|
||||
public class PartInfo {
|
||||
|
||||
private PartType partType;
|
||||
|
||||
List<String> partList;
|
||||
|
||||
public PartInfo(PartType partType) {
|
||||
this.partType = partType;
|
||||
this.partList = new ArrayList<>();
|
||||
}
|
||||
|
||||
public String getPartType () {
|
||||
return partType.getTypeString();
|
||||
}
|
||||
|
||||
public void addPart(List<String> partList) {
|
||||
this.partList.addAll(partList);
|
||||
}
|
||||
|
||||
public List<String> getPartList() {
|
||||
return partList;
|
||||
}
|
||||
|
||||
public boolean isPartitionTable() {
|
||||
return partType != PartType.NONPARTITION && partList.size() > 0;
|
||||
}
|
||||
}
|
@ -0,0 +1,23 @@
|
||||
package com.alibaba.datax.plugin.reader.oceanbasev10reader.util;
|
||||
|
||||
/**
|
||||
* @author johnrobbet
|
||||
*/
|
||||
|
||||
public enum PartType {
|
||||
NONPARTITION("NONPARTITION"),
|
||||
PARTITION("PARTITION"),
|
||||
SUBPARTITION("SUBPARTITION");
|
||||
|
||||
private String typeString;
|
||||
|
||||
PartType (String typeString) {
|
||||
this.typeString = typeString;
|
||||
}
|
||||
|
||||
public String getTypeString() {
|
||||
return typeString;
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -0,0 +1,165 @@
|
||||
package com.alibaba.datax.plugin.reader.oceanbasev10reader.util;
|
||||
|
||||
import com.alibaba.datax.common.util.Configuration;
|
||||
import com.alibaba.datax.plugin.rdbms.reader.Constant;
|
||||
import com.alibaba.datax.plugin.rdbms.reader.Key;
|
||||
import com.alibaba.datax.plugin.rdbms.reader.util.HintUtil;
|
||||
import com.alibaba.datax.plugin.rdbms.util.DBUtil;
|
||||
import com.alibaba.datax.plugin.rdbms.util.DataBaseType;
|
||||
import com.alibaba.datax.plugin.reader.oceanbasev10reader.ext.ObReaderKey;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.sql.Connection;
|
||||
import java.sql.ResultSet;
|
||||
import java.sql.Statement;
|
||||
import java.util.ArrayList;
|
||||
import java.util.List;
|
||||
|
||||
/**
|
||||
* @author johnrobbet
|
||||
*/
|
||||
public class PartitionSplitUtil {
|
||||
private static final Logger LOG = LoggerFactory.getLogger(PartitionSplitUtil.class);
|
||||
|
||||
public static List<Configuration> splitByPartition (Configuration configuration) {
|
||||
List<Configuration> allSlices = new ArrayList<>();
|
||||
List<Object> conns = configuration.getList(Constant.CONN_MARK, Object.class);
|
||||
for (int i = 0, len = conns.size(); i < len; i++) {
|
||||
Configuration sliceConfig = configuration.clone();
|
||||
Configuration connConf = Configuration.from(conns.get(i).toString());
|
||||
String jdbcUrl = connConf.getString(Key.JDBC_URL);
|
||||
sliceConfig.set(Key.JDBC_URL, jdbcUrl);
|
||||
sliceConfig.remove(Constant.CONN_MARK);
|
||||
|
||||
List<String> tables = connConf.getList(Key.TABLE, String.class);
|
||||
for (String table : tables) {
|
||||
Configuration tempSlice = sliceConfig.clone();
|
||||
tempSlice.set(Key.TABLE, table);
|
||||
allSlices.addAll(splitSinglePartitionTable(tempSlice));
|
||||
}
|
||||
}
|
||||
|
||||
return allSlices;
|
||||
}
|
||||
|
||||
private static List<Configuration> splitSinglePartitionTable(Configuration configuration) {
|
||||
String table = configuration.getString(Key.TABLE);
|
||||
String where = configuration.getString(Key.WHERE, null);
|
||||
String column = configuration.getString(Key.COLUMN);
|
||||
final boolean weakRead = configuration.getBool(Key.WEAK_READ, true);
|
||||
|
||||
List<Configuration> slices = new ArrayList();
|
||||
PartInfo partInfo = getObPartInfoBySQL(configuration, table);
|
||||
if (partInfo != null && partInfo.isPartitionTable()) {
|
||||
String partitionType = partInfo.getPartType();
|
||||
for (String partitionName : partInfo.getPartList()) {
|
||||
LOG.info(String.format("add %s %s for table %s", partitionType, partitionName, table));
|
||||
Configuration slice = configuration.clone();
|
||||
slice.set(ObReaderKey.PARTITION_NAME, partitionName);
|
||||
slice.set(ObReaderKey.PARTITION_TYPE, partitionType);
|
||||
slice.set(Key.QUERY_SQL,
|
||||
ObReaderUtils.buildQuerySql(weakRead, column,
|
||||
String.format("%s partition(%s)", table, partitionName), where));
|
||||
slices.add(slice);
|
||||
}
|
||||
} else {
|
||||
LOG.info("fail to get table part info or table is not partitioned, proceed as non-partitioned table.");
|
||||
|
||||
Configuration slice = configuration.clone();
|
||||
slice.set(Key.QUERY_SQL, ObReaderUtils.buildQuerySql(weakRead, column, table, where));
|
||||
slices.add(slice);
|
||||
}
|
||||
|
||||
return slices;
|
||||
}
|
||||
|
||||
private static PartInfo getObPartInfoBySQL(Configuration config, String table) {
|
||||
PartInfo partInfo = new PartInfo(PartType.NONPARTITION);
|
||||
List<String> partList;
|
||||
Connection conn = null;
|
||||
try {
|
||||
String jdbcUrl = config.getString(Key.JDBC_URL);
|
||||
String username = config.getString(Key.USERNAME);
|
||||
String password = config.getString(Key.PASSWORD);
|
||||
String dbname = ObReaderUtils.getDbNameFromJdbcUrl(jdbcUrl);
|
||||
String allTable = "__all_table";
|
||||
|
||||
conn = DBUtil.getConnection(DataBaseType.OceanBase, jdbcUrl, username, password);
|
||||
String obVersion = getResultsFromSql(conn, "select version()").get(0);
|
||||
|
||||
LOG.info("obVersion: " + obVersion);
|
||||
|
||||
if (ObReaderUtils.compareObVersion("2.2.76", obVersion) < 0) {
|
||||
allTable = "__all_table_v2";
|
||||
}
|
||||
|
||||
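// MySQL-mode tenants: read (sub)partition names from the oceanbase.__all_part / __all_sub_part
// system tables, joined with __all_table or __all_table_v2 depending on the observer version.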
String queryPart = String.format(
|
||||
"select p.part_name " +
|
||||
"from oceanbase.__all_part p, oceanbase.%s t, oceanbase.__all_database d " +
|
||||
"where p.table_id = t.table_id " +
|
||||
"and d.database_id = t.database_id " +
|
||||
"and d.database_name = '%s' " +
|
||||
"and t.table_name = '%s'", allTable, dbname, table);
|
||||
String querySubPart = String.format(
|
||||
"select p.sub_part_name " +
|
||||
"from oceanbase.__all_sub_part p, oceanbase.%s t, oceanbase.__all_database d " +
|
||||
"where p.table_id = t.table_id " +
|
||||
"and d.database_id = t.database_id " +
|
||||
"and d.database_name = '%s' " +
|
||||
"and t.table_name = '%s'", allTable, dbname, table);
|
||||
if (config.getString(ObReaderKey.OB_COMPATIBILITY_MODE).equals("ORACLE")) {
|
||||
queryPart = String.format(
|
||||
"select partition_name from all_tab_partitions where TABLE_OWNER = '%s' and table_name = '%s'",
|
||||
dbname.toUpperCase(), table.toUpperCase());
|
||||
querySubPart = String.format(
|
||||
"select subpartition_name from all_tab_subpartitions where TABLE_OWNER = '%s' and table_name = '%s'",
|
||||
dbname.toUpperCase(), table.toUpperCase());
|
||||
}
|
||||
|
||||
PartType partType = PartType.SUBPARTITION;
|
||||
|
||||
// try subpartition first
|
||||
partList = getResultsFromSql(conn, querySubPart);
|
||||
|
||||
// if the table is not sub-partitioned, then try partitions
|
||||
if (partList.isEmpty()) {
|
||||
partList = getResultsFromSql(conn, queryPart);
|
||||
partType = PartType.PARTITION;
|
||||
}
|
||||
|
||||
if (!partList.isEmpty()) {
|
||||
partInfo = new PartInfo(partType);
|
||||
partInfo.addPart(partList);
|
||||
}
|
||||
} catch (Exception ex) {
|
||||
LOG.error("error when get partition list: " + ex.getMessage());
|
||||
} finally {
|
||||
DBUtil.closeDBResources(null, conn);
|
||||
}
|
||||
|
||||
return partInfo;
|
||||
}
|
||||
|
||||
private static List<String> getResultsFromSql(Connection conn, String sql) {
|
||||
List<String> list = new ArrayList();
|
||||
Statement stmt = null;
|
||||
ResultSet rs = null;
|
||||
|
||||
LOG.info("executing sql: " + sql);
|
||||
|
||||
try {
|
||||
stmt = conn.createStatement();
|
||||
rs = stmt.executeQuery(sql);
|
||||
while (rs.next()) {
|
||||
list.add(rs.getString(1));
|
||||
}
|
||||
} catch (Exception e) {
|
||||
LOG.error("error when executing sql: " + e.getMessage());
|
||||
} finally {
|
||||
DBUtil.closeDBResources(rs, stmt, null);
|
||||
}
|
||||
|
||||
return list;
|
||||
}
|
||||
}
|
@ -19,15 +19,6 @@ public class TaskContext {
|
||||
private boolean weakRead = true;
|
||||
private String userSavePoint;
|
||||
private String compatibleMode = ObReaderUtils.OB_COMPATIBLE_MODE_MYSQL;
|
||||
|
||||
public String getPartitionName() {
|
||||
return partitionName;
|
||||
}
|
||||
|
||||
public void setPartitionName(String partitionName) {
|
||||
this.partitionName = partitionName;
|
||||
}
|
||||
|
||||
private String partitionName;
|
||||
|
||||
// 断点续读的保存点
|
||||
@ -174,4 +165,12 @@ public class TaskContext {
|
||||
public void setCompatibleMode(String compatibleMode) {
|
||||
this.compatibleMode = compatibleMode;
|
||||
}
|
||||
|
||||
public String getPartitionName() {
|
||||
return partitionName;
|
||||
}
|
||||
|
||||
public void setPartitionName(String partitionName) {
|
||||
this.partitionName = partitionName;
|
||||
}
|
||||
}
|
||||
|
@ -0,0 +1,22 @@
|
||||
package com.alibaba.datax.plugin.reader.oceanbasev10reader.util;
|
||||
|
||||
import org.junit.Test;
|
||||
|
||||
public class ObReaderUtilsTest {
|
||||
|
||||
@Test
|
||||
public void getDbTest() {
|
||||
assert ObReaderUtils.getDbNameFromJdbcUrl("jdbc:mysql://127.0.0.1:3306/testdb").equalsIgnoreCase("testdb");
|
||||
assert ObReaderUtils.getDbNameFromJdbcUrl("jdbc:oceanbase://127.0.0.1:2883/testdb").equalsIgnoreCase("testdb");
|
||||
assert ObReaderUtils.getDbNameFromJdbcUrl("||_dsc_ob10_dsc_||obcluster:mysql||_dsc_ob10_dsc_||jdbc:mysql://127.0.0.1:3306/testdb").equalsIgnoreCase("testdb");
|
||||
assert ObReaderUtils.getDbNameFromJdbcUrl("||_dsc_ob10_dsc_||obcluster:oracle||_dsc_ob10_dsc_||jdbc:oceanbase://127.0.0.1:3306/testdb").equalsIgnoreCase("testdb");
|
||||
}
|
||||
|
||||
@Test
|
||||
public void compareObVersionTest() {
|
||||
assert ObReaderUtils.compareObVersion("2.2.70", "3.2.2") == -1;
|
||||
assert ObReaderUtils.compareObVersion("2.2.70", "2.2.50") == 1;
|
||||
assert ObReaderUtils.compareObVersion("2.2.70", "3.1.2") == -1;
|
||||
assert ObReaderUtils.compareObVersion("3.1.2", "3.1.2") == 0;
|
||||
}
|
||||
}
|
@ -36,18 +36,18 @@
|
||||
<artifactId>guava</artifactId>
|
||||
<version>16.0.1</version>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.bouncycastle</groupId>
|
||||
<artifactId>bcprov-jdk15on</artifactId>
|
||||
<version>1.52</version>
|
||||
<scope>system</scope>
|
||||
<systemPath>${basedir}/src/main/libs/bcprov-jdk15on-1.52.jar</systemPath>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.aliyun.odps</groupId>
|
||||
<artifactId>odps-sdk-core</artifactId>
|
||||
<version>0.20.7-public</version>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.xerial</groupId>
|
||||
<artifactId>sqlite-jdbc</artifactId>
|
||||
<version>3.34.0</version>
|
||||
</dependency>
|
||||
|
||||
<!-- ref:http://odps.alibaba-inc.com/doc/prddoc/odps_sdk_v2/sdk.html -->
|
||||
<dependency>
|
||||
<groupId>com.aliyun.odps</groupId>
|
||||
<artifactId>odps-sdk-core</artifactId>
|
||||
<version>0.38.4-public</version>
|
||||
</dependency>
|
||||
|
||||
<dependency>
|
||||
<groupId>org.mockito</groupId>
|
||||
@ -87,29 +87,22 @@
|
||||
<version>1.4.10</version>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
|
||||
<dependency>
|
||||
<groupId>org.mockito</groupId>
|
||||
<artifactId>mockito-core</artifactId>
|
||||
<version>1.8.5</version>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.powermock</groupId>
|
||||
<artifactId>powermock-api-mockito</artifactId>
|
||||
<version>1.4.10</version>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
|
||||
<dependency>
|
||||
<groupId>org.powermock</groupId>
|
||||
<artifactId>powermock-module-junit4</artifactId>
|
||||
<version>1.4.10</version>
|
||||
<scope>test</scope>
|
||||
<groupId>commons-codec</groupId>
|
||||
<artifactId>commons-codec</artifactId>
|
||||
<version>1.8</version>
|
||||
</dependency>
|
||||
</dependencies>
|
||||
|
||||
<build>
|
||||
<resources>
|
||||
<resource>
|
||||
<directory>src/main/java</directory>
|
||||
<includes>
|
||||
<include>**/*.properties</include>
|
||||
</includes>
|
||||
</resource>
|
||||
</resources>
|
||||
<plugins>
|
||||
<!-- compiler plugin -->
|
||||
<plugin>
|
||||
|
@ -23,13 +23,6 @@
|
||||
</includes>
|
||||
<outputDirectory>plugin/reader/odpsreader</outputDirectory>
|
||||
</fileSet>
|
||||
<fileSet>
|
||||
<directory>src/main/libs</directory>
|
||||
<includes>
|
||||
<include>*.*</include>
|
||||
</includes>
|
||||
<outputDirectory>plugin/reader/odpsreader/libs</outputDirectory>
|
||||
</fileSet>
|
||||
</fileSets>
|
||||
|
||||
<dependencySets>
|
||||
|
@ -32,4 +32,6 @@ public class Constant {
|
||||
|
||||
public static final String PARSED_COLUMNS = "parsedColumns";
|
||||
|
||||
public static final String PARTITION_FILTER_HINT = "/*query*/";
|
||||
|
||||
}
|
||||
|
@ -6,6 +6,8 @@ public class Key {
|
||||
|
||||
public final static String ACCESS_KEY = "accessKey";
|
||||
|
||||
public final static String SECURITY_TOKEN = "securityToken";
|
||||
|
||||
public static final String PROJECT = "project";
|
||||
|
||||
public final static String TABLE = "table";
|
||||
@ -31,4 +33,13 @@ public class Key {
|
||||
|
||||
public final static String MAX_RETRY_TIME = "maxRetryTime";
|
||||
|
||||
// 分区不存在时
|
||||
public final static String SUCCESS_ON_NO_PATITION="successOnNoPartition";
|
||||
|
||||
// preSql
|
||||
public final static String PRE_SQL="preSql";
|
||||
|
||||
// postSql
|
||||
public final static String POST_SQL="postSql";
|
||||
|
||||
}
|
||||
|
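The three new keys (successOnNoPartition, preSql, postSql) are job-configuration parameters consumed further down in this commit by OdpsReader.Job. The snippet below is a hedged illustration of how a job configuration carrying them would be read; Configuration.from(String) is assumed here to accept a JSON string, and the values are examples only.

import com.alibaba.datax.common.util.Configuration;
import com.alibaba.datax.plugin.reader.odpsreader.Key;
import java.util.List;

public class OdpsReaderKeysSketch {
    public static void main(String[] args) {
        // Example job fragment; the JSON shape is an assumption for this sketch.
        Configuration conf = Configuration.from(
                "{\"successOnNoPartition\": true, \"preSql\": [\"desc example_table;\"], \"postSql\": []}");
        // Same defaults as the reader uses later in this commit.
        boolean successOnNoPartition = conf.getBool(Key.SUCCESS_ON_NO_PATITION, false);
        List<String> preSqls = conf.getList(Key.PRE_SQL, String.class);
        System.out.println(successOnNoPartition + " " + preSqls);
    }
}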
@ -0,0 +1,64 @@
|
||||
description.DATAX_R_ODPS_001=\u7F3A\u5C11\u5FC5\u586B\u53C2\u6570
|
||||
description.DATAX_R_ODPS_002=\u914D\u7F6E\u503C\u4E0D\u5408\u6CD5
|
||||
description.DATAX_R_ODPS_003=\u521B\u5EFAODPS Session\u5931\u8D25
|
||||
description.DATAX_R_ODPS_004=\u83B7\u53D6ODPS Session\u5931\u8D25
|
||||
description.DATAX_R_ODPS_005=\u8BFB\u53D6ODPS\u6570\u636E\u5931\u8D25
|
||||
description.DATAX_R_ODPS_006=\u83B7\u53D6AK\u5931\u8D25
|
||||
description.DATAX_R_ODPS_007=\u8BFB\u53D6\u6570\u636E\u53D1\u751F\u5F02\u5E38
|
||||
description.DATAX_R_ODPS_008=\u6253\u5F00RecordReader\u5931\u8D25
|
||||
description.DATAX_R_ODPS_009=ODPS\u9879\u76EE\u4E0D\u5B58\u5728
|
||||
description.DATAX_R_ODPS_010=\u8868\u4E0D\u5B58\u5728
|
||||
description.DATAX_R_ODPS_011=AK\u4E0D\u5B58\u5728
|
||||
description.DATAX_R_ODPS_012=AK\u975E\u6CD5
|
||||
description.DATAX_R_ODPS_013=AK\u62D2\u7EDD\u8BBF\u95EE
|
||||
description.DATAX_R_ODPS_014=splitMode\u914D\u7F6E\u9519\u8BEF
|
||||
description.DATAX_R_ODPS_015=ODPS\u8D26\u53F7\u7C7B\u578B\u9519\u8BEF
|
||||
description.DATAX_R_ODPS_016=\u4E0D\u652F\u6301\u89C6\u56FE
|
||||
description.DATAX_R_ODPS_017=\u5206\u533A\u914D\u7F6E\u9519\u8BEF
|
||||
description.DATAX_R_ODPS_018=\u5206\u533A\u4E0D\u5B58\u5728
|
||||
description.DATAX_R_ODPS_019=\u6267\u884CODPS SQL\u5931\u8D25
|
||||
description.DATAX_R_ODPS_020=\u6267\u884CODPS SQL\u53D1\u751F\u5F02\u5E38
|
||||
|
||||
|
||||
solution.DATAX_R_ODPS_001=\u8BF7\u4FEE\u6539\u914D\u7F6E\u6587\u4EF6
|
||||
solution.DATAX_R_ODPS_002=\u8BF7\u4FEE\u6539\u914D\u7F6E\u503C
|
||||
solution.DATAX_R_ODPS_003=\u8BF7\u786E\u5B9A\u914D\u7F6E\u7684AK\u6216\u8054\u7CFBODPS\u7BA1\u7406\u5458
|
||||
solution.DATAX_R_ODPS_004=\u8BF7\u8054\u7CFBODPS\u7BA1\u7406\u5458
|
||||
solution.DATAX_R_ODPS_005=\u8BF7\u8054\u7CFBODPS\u7BA1\u7406\u5458
|
||||
solution.DATAX_R_ODPS_006=\u8BF7\u786E\u5B9A\u914D\u7F6E\u7684AK
|
||||
solution.DATAX_R_ODPS_007=\u8BF7\u8054\u7CFBODPS\u7BA1\u7406\u5458
|
||||
solution.DATAX_R_ODPS_008=\u8BF7\u8054\u7CFBODPS\u7BA1\u7406\u5458
|
||||
solution.DATAX_R_ODPS_009=\u8BF7\u786E\u5B9A\u914D\u7F6E\u7684\u9879\u76EE\u540D
|
||||
solution.DATAX_R_ODPS_010=\u8BF7\u786E\u5B9A\u914D\u7F6E\u7684\u8868\u540D
|
||||
solution.DATAX_R_ODPS_011=\u8BF7\u786E\u5B9A\u914D\u7F6E\u7684AK
|
||||
solution.DATAX_R_ODPS_012=\u8BF7\u4FEE\u6539AK
|
||||
solution.DATAX_R_ODPS_013=\u8BF7\u786E\u5B9AAK\u5728\u9879\u76EE\u4E2D\u7684\u6743\u9650
|
||||
solution.DATAX_R_ODPS_014=\u8BF7\u4FEE\u6539splitMode\u503C
|
||||
solution.DATAX_R_ODPS_015=\u8BF7\u4FEE\u6539\u8D26\u53F7\u7C7B\u578B
|
||||
solution.DATAX_R_ODPS_016=\u8BF7\u4FEE\u6539\u914D\u7F6E\u6587\u4EF6
|
||||
solution.DATAX_R_ODPS_017=\u8BF7\u4FEE\u6539\u5206\u533A\u503C
|
||||
solution.DATAX_R_ODPS_018=\u8BF7\u4FEE\u6539\u914D\u7F6E\u7684\u5206\u533A\u503C
|
||||
solution.DATAX_R_ODPS_019=\u8BF7\u8054\u7CFBODPS\u7BA1\u7406\u5458
|
||||
solution.DATAX_R_ODPS_020=\u8BF7\u8054\u7CFBODPS\u7BA1\u7406\u5458
|
||||
|
||||
odpsreader.1=\u6E90\u5934\u8868:{0} \u662F\u865A\u62DF\u89C6\u56FE\uFF0CDataX \u4E0D\u652F\u6301\u8BFB\u53D6\u865A\u62DF\u89C6\u56FE.
|
||||
odpsreader.2=\u60A8\u6240\u914D\u7F6E\u7684 splitMode:{0} \u4E0D\u6B63\u786E. splitMode \u4EC5\u5141\u8BB8\u914D\u7F6E\u4E3A record \u6216\u8005 partition.
|
||||
odpsreader.3=\u5206\u533A\u4FE1\u606F\u6CA1\u6709\u914D\u7F6E.\u7531\u4E8E\u6E90\u5934\u8868:{0} \u4E3A\u5206\u533A\u8868, \u6240\u4EE5\u60A8\u9700\u8981\u914D\u7F6E\u5176\u62BD\u53D6\u7684\u8868\u7684\u5206\u533A\u4FE1\u606F. \u683C\u5F0F\u5F62\u5982:pt=hello,ds=hangzhou\uFF0C\u8BF7\u60A8\u53C2\u8003\u6B64\u683C\u5F0F\u4FEE\u6539\u8BE5\u914D\u7F6E\u9879.
|
||||
odpsreader.4=\u5206\u533A\u4FE1\u606F\u914D\u7F6E\u9519\u8BEF.\u6E90\u5934\u8868:{0} \u867D\u7136\u4E3A\u5206\u533A\u8868, \u4F46\u5176\u5B9E\u9645\u5206\u533A\u503C\u5E76\u4E0D\u5B58\u5728. \u8BF7\u786E\u8BA4\u6E90\u5934\u8868\u5DF2\u7ECF\u751F\u6210\u8BE5\u5206\u533A\uFF0C\u518D\u8FDB\u884C\u6570\u636E\u62BD\u53D6.
|
||||
odpsreader.5=\u5206\u533A\u914D\u7F6E\u9519\u8BEF\uFF0C\u6839\u636E\u60A8\u6240\u914D\u7F6E\u7684\u5206\u533A\u6CA1\u6709\u5339\u914D\u5230\u6E90\u5934\u8868\u4E2D\u7684\u5206\u533A. \u6E90\u5934\u8868\u6240\u6709\u5206\u533A\u662F:[\n{0}\n], \u60A8\u914D\u7F6E\u7684\u5206\u533A\u662F:[\n{1}\n]. \u8BF7\u60A8\u6839\u636E\u5B9E\u9645\u60C5\u51B5\u518D\u4F5C\u51FA\u4FEE\u6539.
|
||||
odpsreader.6=\u5206\u533A\u914D\u7F6E\u9519\u8BEF\uFF0C\u6E90\u5934\u8868:{0} \u4E3A\u975E\u5206\u533A\u8868, \u60A8\u4E0D\u80FD\u914D\u7F6E\u5206\u533A. \u8BF7\u60A8\u5220\u9664\u8BE5\u914D\u7F6E\u9879.
|
||||
odpsreader.7=\u6E90\u5934\u8868:{0} \u7684\u6240\u6709\u5206\u533A\u5217\u662F:[{1}]
|
||||
odpsreader.8=\u5206\u533A\u914D\u7F6E\u9519\u8BEF, \u60A8\u6240\u914D\u7F6E\u7684\u5206\u533A\u7EA7\u6570\u548C\u8BE5\u8868\u7684\u5B9E\u9645\u60C5\u51B5\u4E0D\u4E00\u81F4, \u6BD4\u5982\u5206\u533A:[{0}] \u662F {1} \u7EA7\u5206\u533A, \u800C\u5206\u533A:[{2}] \u662F {3} \u7EA7\u5206\u533A. DataX \u662F\u901A\u8FC7\u82F1\u6587\u9017\u53F7\u5224\u65AD\u60A8\u6240\u914D\u7F6E\u7684\u5206\u533A\u7EA7\u6570\u7684. \u6B63\u786E\u7684\u683C\u5F0F\u5F62\u5982\"pt=$'{bizdate'}, type=0\" \uFF0C\u8BF7\u60A8\u53C2\u8003\u793A\u4F8B\u4FEE\u6539\u8BE5\u914D\u7F6E\u9879.
|
||||
odpsreader.9=\u5206\u533A\u914D\u7F6E\u9519\u8BEF, \u60A8\u6240\u914D\u7F6E\u7684\u5206\u533A:{0} \u7684\u7EA7\u6570:{1} \u4E0E\u60A8\u8981\u8BFB\u53D6\u7684 ODPS \u6E90\u5934\u8868\u7684\u5206\u533A\u7EA7\u6570:{2} \u4E0D\u76F8\u7B49. DataX \u662F\u901A\u8FC7\u82F1\u6587\u9017\u53F7\u5224\u65AD\u60A8\u6240\u914D\u7F6E\u7684\u5206\u533A\u7EA7\u6570\u7684.\u6B63\u786E\u7684\u683C\u5F0F\u5F62\u5982\"pt=$'{bizdate'}, type=0\" \uFF0C\u8BF7\u60A8\u53C2\u8003\u793A\u4F8B\u4FEE\u6539\u8BE5\u914D\u7F6E\u9879.
|
||||
odpsreader.10=\u6E90\u5934\u8868:{0} \u7684\u6240\u6709\u5B57\u6BB5\u662F:[{1}]
|
||||
odpsreader.11=\u8FD9\u662F\u4E00\u6761\u8B66\u544A\u4FE1\u606F\uFF0C\u60A8\u914D\u7F6E\u7684 ODPS \u8BFB\u53D6\u7684\u5217\u4E3A*\uFF0C\u8FD9\u662F\u4E0D\u63A8\u8350\u7684\u884C\u4E3A\uFF0C\u56E0\u4E3A\u5F53\u60A8\u7684\u8868\u5B57\u6BB5\u4E2A\u6570\u3001\u7C7B\u578B\u6709\u53D8\u52A8\u65F6\uFF0C\u53EF\u80FD\u5F71\u54CD\u4EFB\u52A1\u6B63\u786E\u6027\u751A\u81F3\u4F1A\u8FD0\u884C\u51FA\u9519. \u5EFA\u8BAE\u60A8\u628A\u6240\u6709\u9700\u8981\u62BD\u53D6\u7684\u5217\u90FD\u914D\u7F6E\u4E0A.
|
||||
odpsreader.12=\u6E90\u5934\u8868:{0} \u7684\u5206\u533A:{1} \u6CA1\u6709\u5185\u5BB9\u53EF\u62BD\u53D6, \u8BF7\u60A8\u77E5\u6653.
|
||||
odpsreader.13=\u6E90\u5934\u8868:{0} \u7684\u5206\u533A:{1} \u8BFB\u53D6\u884C\u6570\u4E3A\u8D1F\u6570, \u8BF7\u8054\u7CFB ODPS \u7BA1\u7406\u5458\u67E5\u770B\u8868\u72B6\u6001!
|
||||
odpsreader.14=\u6E90\u5934\u8868:{0} \u7684\u5206\u533A:{1} \u8BFB\u53D6\u5931\u8D25, \u8BF7\u8054\u7CFB ODPS \u7BA1\u7406\u5458\u67E5\u770B\u9519\u8BEF\u8BE6\u60C5.
|
||||
|
||||
|
||||
readerproxy.1=odps-read-exception, \u91CD\u8BD5\u7B2C{0}\u6B21
|
||||
readerproxy.2=\u60A8\u7684\u5206\u533A [{0}] \u89E3\u6790\u51FA\u73B0\u9519\u8BEF,\u89E3\u6790\u540E\u6B63\u786E\u7684\u914D\u7F6E\u65B9\u5F0F\u7C7B\u4F3C\u4E3A [ pt=1,dt=1 ].
|
||||
readerproxy.3=\u8868\u6240\u6709\u5206\u533A\u4FE1\u606F\u4E3A: {0} \u5176\u4E2D\u627E\u4E0D\u5230 [{1}] \u5BF9\u5E94\u7684\u5206\u533A\u503C.
|
||||
readerproxy.4=\u60A8\u8BFB\u53D6\u5206\u533A [{0}] \u51FA\u73B0\u65E5\u671F\u8F6C\u6362\u5F02\u5E38, \u65E5\u671F\u7684\u5B57\u7B26\u4E32\u8868\u793A\u4E3A [{1}].
|
||||
readerproxy.5=DataX \u62BD\u53D6 ODPS \u6570\u636E\u4E0D\u652F\u6301\u5B57\u6BB5\u7C7B\u578B\u4E3A:[{0}]. \u76EE\u524D\u652F\u6301\u62BD\u53D6\u7684\u5B57\u6BB5\u7C7B\u578B\u6709\uFF1Abigint, boolean, datetime, double, decimal, string. \u60A8\u53EF\u4EE5\u9009\u62E9\u4E0D\u62BD\u53D6 DataX \u4E0D\u652F\u6301\u7684\u5B57\u6BB5\u6216\u8005\u8054\u7CFB ODPS \u7BA1\u7406\u5458\u5BFB\u6C42\u5E2E\u52A9.
|
@ -5,44 +5,44 @@ import com.alibaba.datax.common.plugin.RecordSender;
|
||||
import com.alibaba.datax.common.spi.Reader;
|
||||
import com.alibaba.datax.common.util.Configuration;
|
||||
import com.alibaba.datax.common.util.FilterUtil;
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.util.IdAndKeyUtil;
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.util.OdpsSplitUtil;
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.util.OdpsUtil;
|
||||
import com.aliyun.odps.*;
|
||||
import com.alibaba.datax.common.util.MessageSource;
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.util.*;
|
||||
import com.alibaba.fastjson.JSON;
|
||||
import com.aliyun.odps.Column;
|
||||
import com.aliyun.odps.Odps;
|
||||
import com.aliyun.odps.Table;
|
||||
import com.aliyun.odps.TableSchema;
|
||||
import com.aliyun.odps.tunnel.TableTunnel.DownloadSession;
|
||||
|
||||
import com.aliyun.odps.type.TypeInfo;
|
||||
import org.apache.commons.lang3.StringUtils;
|
||||
import org.apache.commons.lang3.tuple.MutablePair;
|
||||
import org.apache.commons.lang3.tuple.Pair;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.HashMap;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
import java.util.*;
|
||||
|
||||
public class OdpsReader extends Reader {
|
||||
public static class Job extends Reader.Job {
|
||||
private static final Logger LOG = LoggerFactory
|
||||
.getLogger(Job.class);
|
||||
|
||||
private static boolean IS_DEBUG = LOG.isDebugEnabled();
|
||||
.getLogger(Job.class);
|
||||
private static final MessageSource MESSAGE_SOURCE = MessageSource.loadResourceBundle(OdpsReaderErrorCode.class, Locale.ENGLISH, MessageSource.timeZone);
|
||||
|
||||
private Configuration originalConfig;
|
||||
private boolean successOnNoPartition;
|
||||
private Odps odps;
|
||||
private Table table;
|
||||
|
||||
@Override
|
||||
public void preCheck() {
|
||||
this.init();
|
||||
this.prepare();
|
||||
}
|
||||
|
||||
|
||||
@Override
|
||||
public void init() {
|
||||
this.originalConfig = super.getPluginJobConf();
|
||||
this.successOnNoPartition = this.originalConfig.getBool(Key.SUCCESS_ON_NO_PATITION, false);
|
||||
|
||||
//如果用户没有配置accessId/accessKey,尝试从环境变量获取
|
||||
String accountType = originalConfig.getString(Key.ACCOUNT_TYPE, Constant.DEFAULT_ACCOUNT_TYPE);
|
||||
@ -59,17 +59,21 @@ public class OdpsReader extends Reader {
|
||||
dealSplitMode(this.originalConfig);
|
||||
|
||||
this.odps = OdpsUtil.initOdps(this.originalConfig);
|
||||
|
||||
}
|
||||
|
||||
private void initOdpsTableInfo() {
|
||||
String tableName = this.originalConfig.getString(Key.TABLE);
|
||||
String projectName = this.originalConfig.getString(Key.PROJECT);
|
||||
|
||||
this.table = OdpsUtil.getTable(this.odps, projectName, tableName);
|
||||
this.originalConfig.set(Constant.IS_PARTITIONED_TABLE,
|
||||
OdpsUtil.isPartitionedTable(table));
|
||||
OdpsUtil.isPartitionedTable(table));
|
||||
|
||||
boolean isVirtualView = this.table.isVirtualView();
|
||||
if (isVirtualView) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.VIRTUAL_VIEW_NOT_SUPPORT,
|
||||
String.format("源头表:%s 是虚拟视图,DataX 不支持读取虚拟视图.", tableName));
|
||||
MESSAGE_SOURCE.message("odpsreader.1", tableName));
|
||||
}
|
||||
|
||||
this.dealPartition(this.table);
|
||||
@ -79,11 +83,11 @@ public class OdpsReader extends Reader {
|
||||
private void dealSplitMode(Configuration originalConfig) {
|
||||
String splitMode = originalConfig.getString(Key.SPLIT_MODE, Constant.DEFAULT_SPLIT_MODE).trim();
|
||||
if (splitMode.equalsIgnoreCase(Constant.DEFAULT_SPLIT_MODE) ||
|
||||
splitMode.equalsIgnoreCase(Constant.PARTITION_SPLIT_MODE)) {
|
||||
splitMode.equalsIgnoreCase(Constant.PARTITION_SPLIT_MODE)) {
|
||||
originalConfig.set(Key.SPLIT_MODE, splitMode);
|
||||
} else {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.SPLIT_MODE_ERROR,
|
||||
String.format("您所配置的 splitMode:%s 不正确. splitMode 仅允许配置为 record 或者 partition.", splitMode));
|
||||
MESSAGE_SOURCE.message("odpsreader.2", splitMode));
|
||||
}
|
||||
}
|
||||
|
||||
@ -98,7 +102,7 @@ public class OdpsReader extends Reader {
|
||||
*/
|
||||
private void dealPartition(Table table) {
|
||||
List<String> userConfiguredPartitions = this.originalConfig.getList(
|
||||
Key.PARTITION, String.class);
|
||||
Key.PARTITION, String.class);
|
||||
|
||||
boolean isPartitionedTable = this.originalConfig.getBool(Constant.IS_PARTITIONED_TABLE);
|
||||
List<String> partitionColumns = new ArrayList<String>();
|
||||
@ -107,60 +111,140 @@ public class OdpsReader extends Reader {
|
||||
// 分区表,需要配置分区
|
||||
if (null == userConfiguredPartitions || userConfiguredPartitions.isEmpty()) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.PARTITION_ERROR,
|
||||
String.format("分区信息没有配置.由于源头表:%s 为分区表, 所以您需要配置其抽取的表的分区信息. 格式形如:pt=hello,ds=hangzhou,请您参考此格式修改该配置项.",
|
||||
table.getName()));
|
||||
MESSAGE_SOURCE.message("odpsreader.3", table.getName()));
|
||||
} else {
|
||||
List<String> allPartitions = OdpsUtil.getTableAllPartitions(table);
|
||||
|
||||
if (null == allPartitions || allPartitions.isEmpty()) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.PARTITION_ERROR,
|
||||
String.format("分区信息配置错误.源头表:%s 虽然为分区表, 但其实际分区值并不存在. 请确认源头表已经生成该分区,再进行数据抽取.",
|
||||
table.getName()));
|
||||
}
|
||||
|
||||
List<String> parsedPartitions = expandUserConfiguredPartition(
|
||||
allPartitions, userConfiguredPartitions);
|
||||
|
||||
if (null == parsedPartitions || parsedPartitions.isEmpty()) {
|
||||
throw DataXException.asDataXException(
|
||||
OdpsReaderErrorCode.PARTITION_ERROR,
|
||||
String.format(
|
||||
"分区配置错误,根据您所配置的分区没有匹配到源头表中的分区. 源头表所有分区是:[\n%s\n], 您配置的分区是:[\n%s\n]. 请您根据实际情况在作出修改. ",
|
||||
StringUtils.join(allPartitions, "\n"),
|
||||
StringUtils.join(userConfiguredPartitions, "\n")));
|
||||
}
|
||||
this.originalConfig.set(Key.PARTITION, parsedPartitions);
|
||||
|
||||
for (Column column : table.getSchema()
|
||||
.getPartitionColumns()) {
|
||||
// 获取分区列名, 支持用户配置分区列同步
|
||||
for (Column column : table.getSchema().getPartitionColumns()) {
|
||||
partitionColumns.add(column.getName());
|
||||
}
|
||||
|
||||
List<String> allPartitions = OdpsUtil.getTableAllPartitions(table);
|
||||
|
||||
List<String> parsedPartitions = expandUserConfiguredPartition(
|
||||
table, allPartitions, userConfiguredPartitions, partitionColumns.size());
|
||||
if (null == parsedPartitions || parsedPartitions.isEmpty()) {
|
||||
if (!this.successOnNoPartition) {
|
||||
// PARTITION_NOT_EXISTS_ERROR 这个异常ErrorCode在AdsWriter有使用,用户判断空分区Load Data任务不报错
|
||||
// 其他类型的异常不要使用这个错误码
|
||||
throw DataXException.asDataXException(
|
||||
OdpsReaderErrorCode.PARTITION_NOT_EXISTS_ERROR,
|
||||
MESSAGE_SOURCE.message("odpsreader.5",
|
||||
StringUtils.join(allPartitions, "\n"),
|
||||
StringUtils.join(userConfiguredPartitions, "\n")));
|
||||
} else {
|
||||
LOG.warn(
|
||||
String.format(
|
||||
"The partition configuration is wrong, " +
|
||||
"but you have configured the successOnNoPartition to be true to ignore the error. " +
|
||||
"According to the partition you have configured, it does not match the partition in the source table. " +
|
||||
"All the partitions in the source table are:[\n%s\n], the partition you configured is:[\n%s\n]. " +
|
||||
"please revise it according to the actual situation.",
|
||||
StringUtils.join(allPartitions, "\n"),
|
||||
StringUtils.join(userConfiguredPartitions, "\n")));
|
||||
}
|
||||
}
|
||||
LOG.info(String
|
||||
.format("expand user configured partitions are : %s", JSON.toJSONString(parsedPartitions)));
|
||||
this.originalConfig.set(Key.PARTITION, parsedPartitions);
|
||||
}
|
||||
} else {
|
||||
// 非分区表,则不能配置分区
|
||||
if (null != userConfiguredPartitions
|
||||
&& !userConfiguredPartitions.isEmpty()) {
|
||||
&& !userConfiguredPartitions.isEmpty()) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.PARTITION_ERROR,
|
||||
String.format("分区配置错误,源头表:%s 为非分区表, 您不能配置分区. 请您删除该配置项. ", table.getName()));
|
||||
MESSAGE_SOURCE.message("odpsreader.6", table.getName()));
|
||||
}
|
||||
}
|
||||
|
||||
this.originalConfig.set(Constant.PARTITION_COLUMNS, partitionColumns);
|
||||
if (isPartitionedTable) {
|
||||
LOG.info("{源头表:{} 的所有分区列是:[{}]}", table.getName(),
|
||||
StringUtils.join(partitionColumns, ","));
|
||||
LOG.info(MESSAGE_SOURCE.message("odpsreader.7", table.getName(),
|
||||
StringUtils.join(partitionColumns, ",")));
|
||||
}
|
||||
}
|
||||
|
||||
private List<String> expandUserConfiguredPartition(
|
||||
List<String> allPartitions, List<String> userConfiguredPartitions) {
|
||||
/**
|
||||
* 将用户配置的分区(可能是直接的分区配置 dt=20170101, 可能是简单正则dt=201701*, 也可能是区间过滤条件 dt>=20170101 and dt<20170130) 和ODPS
|
||||
* table所有的分区进行匹配,过滤出用户希望同步的分区集合
|
||||
*
|
||||
* @param table odps table
|
||||
* @param allPartitions odps table所有的分区
|
||||
* @param userConfiguredPartitions 用户配置的分区
|
||||
* @param tableOriginalPartitionDepth odps table分区级数(一级分区,二级分区,三级分区等)
|
||||
* @return 返回过滤出的分区
|
||||
*/
|
||||
private List<String> expandUserConfiguredPartition(Table table,
|
||||
List<String> allPartitions,
|
||||
List<String> userConfiguredPartitions,
|
||||
int tableOriginalPartitionDepth) {
|
||||
|
||||
UserConfiguredPartitionClassification userConfiguredPartitionClassification = OdpsUtil
|
||||
.classifyUserConfiguredPartitions(userConfiguredPartitions);
|
||||
|
||||
if (userConfiguredPartitionClassification.isIncludeHintPartition()) {
|
||||
List<String> expandUserConfiguredPartitionResult = new ArrayList<String>();
|
||||
|
||||
// 处理不包含/*query*/的分区过滤
|
||||
if (!userConfiguredPartitionClassification.getUserConfiguredNormalPartition().isEmpty()) {
|
||||
expandUserConfiguredPartitionResult.addAll(expandNoHintUserConfiguredPartition(allPartitions,
|
||||
userConfiguredPartitionClassification.getUserConfiguredNormalPartition(),
|
||||
tableOriginalPartitionDepth));
|
||||
}
|
||||
if (!allPartitions.isEmpty()) {
|
||||
expandUserConfiguredPartitionResult.addAll(expandHintUserConfiguredPartition(table,
|
||||
allPartitions, userConfiguredPartitionClassification.getUserConfiguredHintPartition()));
|
||||
}
|
||||
return expandUserConfiguredPartitionResult;
|
||||
} else {
|
||||
return expandNoHintUserConfiguredPartition(allPartitions, userConfiguredPartitions,
|
||||
tableOriginalPartitionDepth);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* 匹配包含 HINT 条件的过滤
|
||||
*
|
||||
* @param table odps table
|
||||
* @param allPartitions odps table所有的分区
|
||||
* @param userHintConfiguredPartitions 用户配置的分区
|
||||
* @return 返回过滤出的分区
|
||||
*/
|
||||
private List<String> expandHintUserConfiguredPartition(Table table,
|
||||
List<String> allPartitions,
|
||||
List<String> userHintConfiguredPartitions) {
|
||||
try {
|
||||
// load odps table all partitions into sqlite memory database
|
||||
SqliteUtil sqliteUtil = new SqliteUtil();
|
||||
sqliteUtil.loadAllPartitionsIntoSqlite(table, allPartitions);
|
||||
return sqliteUtil.selectUserConfiguredPartition(userHintConfiguredPartitions);
|
||||
} catch (Exception ex) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.PARTITION_ERROR,
|
||||
String.format("Expand user configured partition has exception: %s", ex.getMessage()), ex);
|
||||
}
|
||||
}
|
||||
|
||||
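// The method above loads all table partitions into an in-memory SQLite database via SqliteUtil and
// then evaluates the /*query*/ hint as SQL. SqliteUtil itself is not shown in this diff; the
// standalone sketch below only illustrates the in-memory SQLite pattern that the sqlite-jdbc
// dependency added in the pom enables. Table and column names here are invented for the example.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InMemorySqliteSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite::memory:");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("CREATE TABLE partitions (dt TEXT, type TEXT)");
            stmt.executeUpdate("INSERT INTO partitions VALUES ('20170101', '0'), ('20170102', '1')");
            // A range-style filter comparable to what a /*query*/ hint expresses.
            try (ResultSet rs = stmt.executeQuery("SELECT dt, type FROM partitions WHERE dt >= '20170102'")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "," + rs.getString(2)); // 20170102,1
                }
            }
        }
    }
}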
/**
|
||||
* 匹配没有 HINT 条件的过滤,包括 简单正则匹配(dt=201701*) 和 直接匹配(dt=20170101)
|
||||
*
|
||||
* @param allPartitions odps table所有的分区
|
||||
* @param userNormalConfiguredPartitions 用户配置的分区
|
||||
* @param tableOriginalPartitionDepth odps table分区级数(一级分区,二级分区,三级分区等)
|
||||
* @return 返回过滤出的分区
|
||||
*/
|
||||
private List<String> expandNoHintUserConfiguredPartition(List<String> allPartitions,
|
||||
List<String> userNormalConfiguredPartitions,
|
||||
int tableOriginalPartitionDepth) {
|
||||
// 对odps 本身的所有分区进行特殊字符的处理
|
||||
LOG.info("format partition with rules: remove all space; remove all '; replace / to ,");
|
||||
// 表里面已有分区量比较大,有些任务无关,没有打印
|
||||
List<String> allStandardPartitions = OdpsUtil
|
||||
.formatPartitions(allPartitions);
|
||||
.formatPartitions(allPartitions);
|
||||
|
||||
// 对用户自身配置的所有分区进行特殊字符的处理
|
||||
List<String> allStandardUserConfiguredPartitions = OdpsUtil
|
||||
.formatPartitions(userConfiguredPartitions);
|
||||
.formatPartitions(userNormalConfiguredPartitions);
|
||||
LOG.info("user configured partition: {}", JSON.toJSONString(userNormalConfiguredPartitions));
|
||||
LOG.info("formated partition: {}", JSON.toJSONString(allStandardUserConfiguredPartitions));
|
||||
|
||||
/**
|
||||
* 对配置的分区级数(深度)进行检查
|
||||
@ -177,20 +261,20 @@ public class OdpsReader extends Reader {
|
||||
comparedPartitionDepth = comparedPartition.split(",").length;
|
||||
if (comparedPartitionDepth != firstPartitionDepth) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.PARTITION_ERROR,
|
||||
String.format("分区配置错误, 您所配置的分区级数和该表的实际情况不一致, 比如分区:[%s] 是 %s 级分区, 而分区:[%s] 是 %s 级分区. DataX 是通过英文逗号判断您所配置的分区级数的. 正确的格式形如\"pt=${bizdate}, type=0\" ,请您参考示例修改该配置项. ",
|
||||
firstPartition, firstPartitionDepth, comparedPartition, comparedPartitionDepth));
|
||||
MESSAGE_SOURCE
|
||||
.message("odpsreader.8", firstPartition, firstPartitionDepth, comparedPartition,
|
||||
comparedPartitionDepth));
|
||||
}
|
||||
}
|
||||
|
||||
int tableOriginalPartitionDepth = allStandardPartitions.get(0).split(",").length;
|
||||
if (firstPartitionDepth != tableOriginalPartitionDepth) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.PARTITION_ERROR,
|
||||
String.format("分区配置错误, 您所配置的分区:%s 的级数:%s 与您要读取的 ODPS 源头表的分区级数:%s 不相等. DataX 是通过英文逗号判断您所配置的分区级数的.正确的格式形如\"pt=${bizdate}, type=0\" ,请您参考示例修改该配置项.",
|
||||
firstPartition, firstPartitionDepth, tableOriginalPartitionDepth));
|
||||
MESSAGE_SOURCE
|
||||
.message("odpsreader.9", firstPartition, firstPartitionDepth, tableOriginalPartitionDepth));
|
||||
}
|
||||
|
||||
List<String> retPartitions = FilterUtil.filterByRegulars(allStandardPartitions,
|
||||
allStandardUserConfiguredPartitions);
|
||||
allStandardUserConfiguredPartitions);
|
||||
|
||||
return retPartitions;
|
||||
}
|
||||
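// The non-hint path above normalizes partition strings and then delegates the matching to
// FilterUtil.filterByRegulars. A minimal standalone sketch of that "simple regex" style matching
// follows, assuming '*' is the only wildcard users write (e.g. dt=201701*); it is an illustration,
// not the committed FilterUtil.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PartitionFilterSketch {
    static List<String> filterByPatterns(List<String> allPartitions, List<String> patterns) {
        List<String> matched = new ArrayList<>();
        for (String partition : allPartitions) {
            for (String pattern : patterns) {
                // Treat '*' as a wildcard; everything else is taken literally enough for this example.
                String regex = pattern.replace("*", ".*");
                if (partition.matches(regex)) {
                    matched.add(partition);
                    break;
                }
            }
        }
        return matched;
    }

    public static void main(String[] args) {
        List<String> all = Arrays.asList("dt=20170101,type=0", "dt=20170102,type=0", "dt=20170201,type=0");
        // Prints [dt=20170101,type=0, dt=20170102,type=0]
        System.out.println(filterByPatterns(all, Arrays.asList("dt=201701*,type=0")));
    }
}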
@ -198,11 +282,11 @@ public class OdpsReader extends Reader {
|
||||
private void dealColumn(Table table) {
|
||||
// 用户配置的 column 之前已经确保其不为空
|
||||
List<String> userConfiguredColumns = this.originalConfig.getList(
|
||||
Key.COLUMN, String.class);
|
||||
Key.COLUMN, String.class);
|
||||
|
||||
List<Column> allColumns = OdpsUtil.getTableAllColumns(table);
|
||||
List<String> allNormalColumns = OdpsUtil
|
||||
.getTableOriginalColumnNameList(allColumns);
|
||||
.getTableOriginalColumnNameList(allColumns);
|
||||
|
||||
StringBuilder columnMeta = new StringBuilder();
|
||||
for (Column column : allColumns) {
|
||||
@ -210,26 +294,26 @@ public class OdpsReader extends Reader {
|
||||
}
|
||||
columnMeta.setLength(columnMeta.length() - 1);
|
||||
|
||||
LOG.info("源头表:{} 的所有字段是:[{}]", table.getName(), columnMeta.toString());
|
||||
LOG.info(MESSAGE_SOURCE.message("odpsreader.10", table.getName(), columnMeta.toString()));
|
||||
|
||||
if (1 == userConfiguredColumns.size()
|
||||
&& "*".equals(userConfiguredColumns.get(0))) {
|
||||
LOG.warn("这是一条警告信息,您配置的 ODPS 读取的列为*,这是不推荐的行为,因为当您的表字段个数、类型有变动时,可能影响任务正确性甚至会运行出错. 建议您把所有需要抽取的列都配置上. ");
|
||||
&& "*".equals(userConfiguredColumns.get(0))) {
|
||||
LOG.warn(MESSAGE_SOURCE.message("odpsreader.11"));
|
||||
this.originalConfig.set(Key.COLUMN, allNormalColumns);
|
||||
}
|
||||
|
||||
userConfiguredColumns = this.originalConfig.getList(
|
||||
Key.COLUMN, String.class);
|
||||
Key.COLUMN, String.class);
|
||||
|
||||
/**
|
||||
* warn: 字符串常量需要与表原生字段tableOriginalColumnNameList 分开存放 demo:
|
||||
* ["id","'id'","name"]
|
||||
*/
|
||||
List<String> allPartitionColumns = this.originalConfig.getList(
|
||||
Constant.PARTITION_COLUMNS, String.class);
|
||||
Constant.PARTITION_COLUMNS, String.class);
|
||||
List<Pair<String, ColumnType>> parsedColumns = OdpsUtil
|
||||
.parseColumns(allNormalColumns, allPartitionColumns,
|
||||
userConfiguredColumns);
|
||||
.parseColumns(allNormalColumns, allPartitionColumns,
|
||||
userConfiguredColumns);
|
||||
|
||||
this.originalConfig.set(Constant.PARSED_COLUMNS, parsedColumns);
|
||||
|
||||
@ -238,7 +322,7 @@ public class OdpsReader extends Reader {
|
||||
for (int i = 0, len = parsedColumns.size(); i < len; i++) {
|
||||
Pair<String, ColumnType> pair = parsedColumns.get(i);
|
||||
sb.append(String.format(" %s : %s", pair.getLeft(),
|
||||
pair.getRight()));
|
||||
pair.getRight()));
|
||||
if (i != len - 1) {
|
||||
sb.append(",");
|
||||
}
|
||||
@ -247,9 +331,36 @@ public class OdpsReader extends Reader {
|
||||
LOG.info("parsed column details: {} .", sb.toString());
|
||||
}
|
||||
|
||||
|
||||
@Override
|
||||
public void prepare() {
|
||||
List<String> preSqls = this.originalConfig.getList(Key.PRE_SQL, String.class);
|
||||
if (preSqls != null && !preSqls.isEmpty()) {
|
||||
LOG.info(
|
||||
String.format("Beigin to exectue preSql : %s. \n Attention: these preSqls must be idempotent!!!",
|
||||
JSON.toJSONString(preSqls)));
|
||||
long beginTime = System.currentTimeMillis();
|
||||
|
||||
StringBuffer preSqlBuffer = new StringBuffer();
|
||||
for (String preSql : preSqls) {
|
||||
preSql = preSql.trim();
|
||||
if (StringUtils.isNotBlank(preSql) && !preSql.endsWith(";")) {
|
||||
preSql = String.format("%s;", preSql);
|
||||
}
|
||||
if (StringUtils.isNotBlank(preSql)) {
|
||||
preSqlBuffer.append(preSql);
|
||||
}
|
||||
}
|
||||
if (StringUtils.isNotBlank(preSqlBuffer.toString())) {
|
||||
OdpsUtil.runSqlTaskWithRetry(this.odps, preSqlBuffer.toString(), "preSql");
|
||||
} else {
|
||||
LOG.info("skip to execute the preSql: {}", JSON.toJSONString(preSqls));
|
||||
}
|
||||
long endTime = System.currentTimeMillis();
|
||||
|
||||
LOG.info(
|
||||
String.format("Exectue odpsreader preSql successfully! cost time: %s ms.", (endTime - beginTime)));
|
||||
}
|
||||
this.initOdpsTableInfo();
|
||||
}
|
||||
|
||||
@Override
|
||||
@ -259,6 +370,33 @@ public class OdpsReader extends Reader {
|
||||
|
||||
@Override
|
||||
public void post() {
|
||||
List<String> postSqls = this.originalConfig.getList(Key.POST_SQL, String.class);
|
||||
|
||||
if (postSqls != null && !postSqls.isEmpty()) {
|
||||
LOG.info(
|
||||
String.format("Beigin to exectue postSql : %s. \n Attention: these postSqls must be idempotent!!!",
|
||||
JSON.toJSONString(postSqls)));
|
||||
long beginTime = System.currentTimeMillis();
|
||||
StringBuffer postSqlBuffer = new StringBuffer();
|
||||
for (String postSql : postSqls) {
|
||||
postSql = postSql.trim();
|
||||
if (StringUtils.isNotBlank(postSql) && !postSql.endsWith(";")) {
|
||||
postSql = String.format("%s;", postSql);
|
||||
}
|
||||
if (StringUtils.isNotBlank(postSql)) {
|
||||
postSqlBuffer.append(postSql);
|
||||
}
|
||||
}
|
||||
if (StringUtils.isNotBlank(postSqlBuffer.toString())) {
|
||||
OdpsUtil.runSqlTaskWithRetry(this.odps, postSqlBuffer.toString(), "postSql");
|
||||
} else {
|
||||
LOG.info("skip to execute the postSql: {}", JSON.toJSONString(postSqls));
|
||||
}
|
||||
|
||||
long endTime = System.currentTimeMillis();
|
||||
LOG.info(
|
||||
String.format("Exectue odpsreader postSql successfully! cost time: %s ms.", (endTime - beginTime)));
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
@ -268,6 +406,7 @@ public class OdpsReader extends Reader {
|
||||
|
||||
public static class Task extends Reader.Task {
|
||||
private static final Logger LOG = LoggerFactory.getLogger(Task.class);
|
||||
private static final MessageSource MESSAGE_SOURCE = MessageSource.loadResourceBundle(OdpsReader.class);
|
||||
private Configuration readerSliceConf;
|
||||
|
||||
private String tunnelServer;
|
||||
@ -278,32 +417,35 @@ public class OdpsReader extends Reader {
|
||||
private boolean isPartitionedTable;
|
||||
private String sessionId;
|
||||
private boolean isCompress;
|
||||
private boolean successOnNoPartition;
|
||||
|
||||
@Override
|
||||
public void init() {
|
||||
this.readerSliceConf = super.getPluginJobConf();
|
||||
this.tunnelServer = this.readerSliceConf.getString(
|
||||
Key.TUNNEL_SERVER, null);
|
||||
Key.TUNNEL_SERVER, null);
|
||||
|
||||
this.odps = OdpsUtil.initOdps(this.readerSliceConf);
|
||||
this.projectName = this.readerSliceConf.getString(Key.PROJECT);
|
||||
this.tableName = this.readerSliceConf.getString(Key.TABLE);
|
||||
this.table = OdpsUtil.getTable(this.odps, projectName, tableName);
|
||||
this.isPartitionedTable = this.readerSliceConf
|
||||
.getBool(Constant.IS_PARTITIONED_TABLE);
|
||||
.getBool(Constant.IS_PARTITIONED_TABLE);
|
||||
this.sessionId = this.readerSliceConf.getString(Constant.SESSION_ID, null);
|
||||
|
||||
|
||||
|
||||
this.isCompress = this.readerSliceConf.getBool(Key.IS_COMPRESS, false);
|
||||
this.successOnNoPartition = this.readerSliceConf.getBool(Key.SUCCESS_ON_NO_PATITION, false);
|
||||
|
||||
// sessionId 为空的情况是:切分级别只到 partition 的情况
|
||||
if (StringUtils.isBlank(this.sessionId)) {
|
||||
String partition = this.readerSliceConf.getString(Key.PARTITION);
|
||||
|
||||
// 没有分区读取时, 是没有sessionId这些的
|
||||
if (this.isPartitionedTable && StringUtils.isBlank(partition) && this.successOnNoPartition) {
|
||||
LOG.warn("Partition is blank, but you config successOnNoPartition[true] ,don't need to create session");
|
||||
} else if (StringUtils.isBlank(this.sessionId)) {
|
||||
DownloadSession session = OdpsUtil.createMasterSessionForPartitionedTable(odps,
|
||||
tunnelServer, projectName, tableName, this.readerSliceConf.getString(Key.PARTITION));
|
||||
tunnelServer, projectName, tableName, this.readerSliceConf.getString(Key.PARTITION));
|
||||
this.sessionId = session.getId();
|
||||
}
|
||||
|
||||
LOG.info("sessionId:{}", this.sessionId);
|
||||
}
|
||||
|
||||
@ -316,30 +458,35 @@ public class OdpsReader extends Reader {
|
||||
DownloadSession downloadSession = null;
|
||||
String partition = this.readerSliceConf.getString(Key.PARTITION);
|
||||
|
||||
if (this.isPartitionedTable && StringUtils.isBlank(partition) && this.successOnNoPartition) {
|
||||
LOG.warn(String.format(
|
||||
"Partition is blank,not need to be read"));
|
||||
recordSender.flush();
|
||||
return;
|
||||
}
|
||||
|
||||
if (this.isPartitionedTable) {
|
||||
downloadSession = OdpsUtil.getSlaveSessionForPartitionedTable(this.odps, this.sessionId,
|
||||
this.tunnelServer, this.projectName, this.tableName, partition);
|
||||
this.tunnelServer, this.projectName, this.tableName, partition);
|
||||
} else {
|
||||
downloadSession = OdpsUtil.getSlaveSessionForNonPartitionedTable(this.odps, this.sessionId,
|
||||
this.tunnelServer, this.projectName, this.tableName);
|
||||
this.tunnelServer, this.projectName, this.tableName);
|
||||
}
|
||||
|
||||
long start = this.readerSliceConf.getLong(Constant.START_INDEX, 0);
|
||||
long count = this.readerSliceConf.getLong(Constant.STEP_COUNT,
|
||||
downloadSession.getRecordCount());
|
||||
downloadSession.getRecordCount());
|
||||
|
||||
if (count > 0) {
|
||||
LOG.info(String.format(
|
||||
"Begin to read ODPS table:%s, partition:%s, startIndex:%s, count:%s.",
|
||||
this.tableName, partition, start, count));
|
||||
"Begin to read ODPS table:%s, partition:%s, startIndex:%s, count:%s.",
|
||||
this.tableName, partition, start, count));
|
||||
} else if (count == 0) {
|
||||
LOG.warn(String.format("源头表:%s 的分区:%s 没有内容可抽取, 请您知晓.",
|
||||
this.tableName, partition));
|
||||
LOG.warn(MESSAGE_SOURCE.message("odpsreader.12", this.tableName, partition));
|
||||
return;
|
||||
} else {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.READ_DATA_FAIL,
|
||||
String.format("源头表:%s 的分区:%s 读取行数为负数, 请联系 ODPS 管理员查看表状态!",
|
||||
this.tableName, partition));
|
||||
MESSAGE_SOURCE.message("odpsreader.13", this.tableName, partition));
|
||||
}
|
||||
|
||||
TableSchema tableSchema = this.table.getSchema();
|
||||
@ -347,37 +494,36 @@ public class OdpsReader extends Reader {
|
||||
allColumns.addAll(tableSchema.getColumns());
|
||||
allColumns.addAll(tableSchema.getPartitionColumns());
|
||||
|
||||
Map<String, OdpsType> columnTypeMap = new HashMap<String, OdpsType>();
|
||||
Map<String, TypeInfo> columnTypeMap = new HashMap<String, TypeInfo>();
|
||||
for (Column column : allColumns) {
|
||||
columnTypeMap.put(column.getName(), column.getType());
|
||||
columnTypeMap.put(column.getName(), column.getTypeInfo());
|
||||
}
|
||||
|
||||
try {
|
||||
List<Configuration> parsedColumnsTmp = this.readerSliceConf
|
||||
.getListConfiguration(Constant.PARSED_COLUMNS);
|
||||
.getListConfiguration(Constant.PARSED_COLUMNS);
|
||||
List<Pair<String, ColumnType>> parsedColumns = new ArrayList<Pair<String, ColumnType>>();
|
||||
for (int i = 0; i < parsedColumnsTmp.size(); i++) {
|
||||
Configuration eachColumnConfig = parsedColumnsTmp.get(i);
|
||||
String columnName = eachColumnConfig.getString("left");
|
||||
ColumnType columnType = ColumnType
|
||||
.asColumnType(eachColumnConfig.getString("right"));
|
||||
.asColumnType(eachColumnConfig.getString("right"));
|
||||
parsedColumns.add(new MutablePair<String, ColumnType>(
|
||||
columnName, columnType));
|
||||
columnName, columnType));
|
||||
|
||||
}
|
||||
ReaderProxy readerProxy = new ReaderProxy(recordSender, downloadSession,
|
||||
columnTypeMap, parsedColumns, partition, this.isPartitionedTable,
|
||||
start, count, this.isCompress);
|
||||
start, count, this.isCompress, this.readerSliceConf);
|
||||
|
||||
readerProxy.doRead();
|
||||
} catch (Exception e) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.READ_DATA_FAIL,
|
||||
String.format("源头表:%s 的分区:%s 读取失败, 请联系 ODPS 管理员查看错误详情.", this.tableName, partition), e);
|
||||
MESSAGE_SOURCE.message("odpsreader.14", this.tableName, partition), e);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
|
||||
@Override
|
||||
public void post() {
|
||||
}
|
||||
|
@ -1,45 +1,53 @@
|
||||
package com.alibaba.datax.plugin.reader.odpsreader;
|
||||
|
||||
import com.alibaba.datax.common.spi.ErrorCode;
|
||||
import com.alibaba.datax.common.util.MessageSource;
|
||||
|
||||
public enum OdpsReaderErrorCode implements ErrorCode {
|
||||
REQUIRED_VALUE("OdpsReader-00", "您缺失了必须填写的参数值."),
|
||||
ILLEGAL_VALUE("OdpsReader-01", "您配置的值不合法."),
|
||||
CREATE_DOWNLOADSESSION_FAIL("OdpsReader-03", "创建 ODPS 的 downloadSession 失败."),
|
||||
GET_DOWNLOADSESSION_FAIL("OdpsReader-04", "获取 ODPS 的 downloadSession 失败."),
|
||||
READ_DATA_FAIL("OdpsReader-05", "读取 ODPS 源头表失败."),
|
||||
GET_ID_KEY_FAIL("OdpsReader-06", "获取 accessId/accessKey 失败."),
|
||||
REQUIRED_VALUE("DATAX_R_ODPS_001", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_001"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_001")),
|
||||
ILLEGAL_VALUE("DATAX_R_ODPS_002", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_002"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_002")),
|
||||
CREATE_DOWNLOADSESSION_FAIL("DATAX_R_ODPS_003", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_003"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_003")),
|
||||
GET_DOWNLOADSESSION_FAIL("DATAX_R_ODPS_004", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_004"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_004")),
|
||||
READ_DATA_FAIL("DATAX_R_ODPS_005", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_005"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_005")),
|
||||
GET_ID_KEY_FAIL("DATAX_R_ODPS_006", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_006"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_006")),
|
||||
|
||||
ODPS_READ_EXCEPTION("OdpsReader-07", "读取 odps 异常"),
|
||||
OPEN_RECORD_READER_FAILED("OdpsReader-08", "打开 recordReader 失败."),
|
||||
ODPS_READ_EXCEPTION("DATAX_R_ODPS_007", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_007"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_007")),
|
||||
OPEN_RECORD_READER_FAILED("DATAX_R_ODPS_008", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_008"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_008")),
|
||||
|
||||
ODPS_PROJECT_NOT_FOUNT("OdpsReader-10", "您配置的值不合法, odps project 不存在."), //ODPS-0420111: Project not found
|
||||
ODPS_PROJECT_NOT_FOUNT("DATAX_R_ODPS_009", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_009"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_009")), //ODPS-0420111: Project not found
|
||||
|
||||
ODPS_TABLE_NOT_FOUNT("OdpsReader-12", "您配置的值不合法, odps table 不存在."), // ODPS-0130131:Table not found
|
||||
ODPS_TABLE_NOT_FOUNT("DATAX_R_ODPS_010", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_010"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_010")), // ODPS-0130131:Table not found
|
||||
|
||||
ODPS_ACCESS_KEY_ID_NOT_FOUND("OdpsReader-13", "您配置的值不合法, odps accessId,accessKey 不存在."), //ODPS-0410051:Invalid credentials - accessKeyId not found
|
||||
ODPS_ACCESS_KEY_ID_NOT_FOUND("DATAX_R_ODPS_011", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_011"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_011")), //ODPS-0410051:Invalid credentials - accessKeyId not found
|
||||
|
||||
ODPS_ACCESS_KEY_INVALID("OdpsReader-14", "您配置的值不合法, odps accessKey 错误."), //ODPS-0410042:Invalid signature value - User signature dose not match
|
||||
ODPS_ACCESS_KEY_INVALID("DATAX_R_ODPS_012", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_012"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_012")), //ODPS-0410042:Invalid signature value - User signature dose not match
|
||||
|
||||
ODPS_ACCESS_DENY("OdpsReader-15", "拒绝访问, 您不在 您配置的 project 中."), //ODPS-0420095: Access Denied - Authorization Failed [4002], You doesn't exist in project
|
||||
ODPS_ACCESS_DENY("DATAX_R_ODPS_013", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_013"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_013")), //ODPS-0420095: Access Denied - Authorization Failed [4002], You doesn't exist in project
|
||||
|
||||
|
||||
|
||||
SPLIT_MODE_ERROR("OdpsReader-30", "splitMode配置错误."),
|
||||
SPLIT_MODE_ERROR("DATAX_R_ODPS_014", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_014"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_014")),
|
||||
|
||||
ACCOUNT_TYPE_ERROR("OdpsReader-31", "odps 账号类型错误."),
|
||||
ACCOUNT_TYPE_ERROR("DATAX_R_ODPS_015", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_015"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_015")),
|
||||
|
||||
VIRTUAL_VIEW_NOT_SUPPORT("OdpsReader-32", "Datax 不支持 读取虚拟视图."),
|
||||
VIRTUAL_VIEW_NOT_SUPPORT("DATAX_R_ODPS_016", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_016"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_016")),
|
||||
|
||||
PARTITION_ERROR("OdpsReader-33", "分区配置错误."),
|
||||
PARTITION_ERROR("DATAX_R_ODPS_017", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_017"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_017")),
|
||||
|
||||
PARTITION_NOT_EXISTS_ERROR("DATAX_R_ODPS_018", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_018"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_018")),
|
||||
|
||||
RUN_SQL_FAILED("DATAX_R_ODPS_019", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_019"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_019")),
|
||||
|
||||
RUN_SQL_ODPS_EXCEPTION("DATAX_R_ODPS_020", MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("description.DATAX_R_ODPS_020"),MessageSource.loadResourceBundle(OdpsReaderErrorCode.class).message("solution.DATAX_R_ODPS_020")),
|
||||
;
|
||||
private final String code;
|
||||
private final String description;
|
||||
private final String solution;
|
||||
|
||||
private OdpsReaderErrorCode(String code, String description) {
|
||||
private OdpsReaderErrorCode(String code, String description,String solution) {
|
||||
this.code = code;
|
||||
this.description = description;
|
||||
this.solution = solution;
|
||||
}
|
||||
|
||||
@Override
|
||||
@ -52,9 +60,12 @@ public enum OdpsReaderErrorCode implements ErrorCode {
|
||||
return this.description;
|
||||
}
|
||||
|
||||
public String getSolution() {
|
||||
return solution;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String toString() {
|
||||
return String.format("Code:[%s], Description:[%s]. ", this.code,
|
||||
this.description);
|
||||
return String.format("Code:%s:%s, Solution:[%s]. ", this.code,this.description,this.solution);
|
||||
}
|
||||
}
|
||||
|
@ -3,28 +3,37 @@ package com.alibaba.datax.plugin.reader.odpsreader;
|
||||
import com.alibaba.datax.common.element.*;
|
||||
import com.alibaba.datax.common.exception.DataXException;
|
||||
import com.alibaba.datax.common.plugin.RecordSender;
|
||||
import com.alibaba.datax.common.util.Configuration;
|
||||
import com.alibaba.datax.common.util.MessageSource;
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.util.OdpsUtil;
|
||||
import com.alibaba.fastjson.JSON;
|
||||
import com.aliyun.odps.Column;
|
||||
import com.aliyun.odps.OdpsType;
|
||||
import com.aliyun.odps.data.*;
|
||||
import com.aliyun.odps.data.Record;
|
||||
import com.aliyun.odps.data.RecordReader;
|
||||
import com.aliyun.odps.tunnel.TableTunnel;
|
||||
import com.aliyun.odps.type.ArrayTypeInfo;
|
||||
import com.aliyun.odps.type.MapTypeInfo;
|
||||
import com.aliyun.odps.type.TypeInfo;
|
||||
import org.apache.commons.codec.binary.Base64;
|
||||
import org.apache.commons.lang3.StringUtils;
|
||||
import org.apache.commons.lang3.tuple.Pair;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.text.ParseException;
|
||||
import java.util.HashMap;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.text.SimpleDateFormat;
|
||||
import java.util.*;
|
||||
|
||||
public class ReaderProxy {
|
||||
private static final Logger LOG = LoggerFactory
|
||||
.getLogger(ReaderProxy.class);
|
||||
private static final MessageSource MESSAGE_SOURCE = MessageSource.loadResourceBundle(ReaderProxy.class);
|
||||
private static boolean IS_DEBUG = LOG.isDebugEnabled();
|
||||
|
||||
private RecordSender recordSender;
|
||||
private TableTunnel.DownloadSession downloadSession;
|
||||
private Map<String, OdpsType> columnTypeMap;
|
||||
private Map<String, TypeInfo> columnTypeMap;
|
||||
private List<Pair<String, ColumnType>> parsedColumns;
|
||||
private String partition;
|
||||
private boolean isPartitionTable;
|
||||
@ -33,10 +42,37 @@ public class ReaderProxy {
|
||||
private long count;
|
||||
private boolean isCompress;
|
||||
|
||||
private static final String NULL_INDICATOR = null;
|
||||
// TODO 没有支持用户可配置
|
||||
// TODO 没有timezone
|
||||
private SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
|
||||
|
||||
// 读取 jvm 默认时区
|
||||
private Calendar calendarForDate = null;
|
||||
private boolean useDateWithCalendar = true;
|
||||
|
||||
private Calendar initCalendar(Configuration config) {
|
||||
// 理论上不会有其他选择,有配置化可以随时应急
|
||||
String calendarType = config.getString("calendarType", "iso8601");
|
||||
Boolean lenient = config.getBool("calendarLenient", true);
|
||||
|
||||
// 默认jvm时区
|
||||
TimeZone timeZone = TimeZone.getDefault();
|
||||
String timeZoneStr = config.getString("calendarTimeZone");
|
||||
if (StringUtils.isNotBlank(timeZoneStr)) {
|
||||
// 如果用户明确指定使用用户指定的
|
||||
timeZone = TimeZone.getTimeZone(timeZoneStr);
|
||||
}
|
||||
|
||||
Calendar calendarForDate = new Calendar.Builder().setCalendarType(calendarType).setLenient(lenient)
|
||||
.setTimeZone(timeZone).build();
|
||||
return calendarForDate;
|
||||
}
|
||||
|
||||
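// initCalendar() above builds the Calendar used for reading DATE columns from three optional task
// settings: calendarType, calendarLenient and calendarTimeZone (falling back to the JVM default
// zone). The standalone illustration below shows the resulting Calendar.Builder call; the time zone
// value is an example, not a default of the plugin.
import java.util.Calendar;
import java.util.TimeZone;

public class CalendarSketch {
    public static void main(String[] args) {
        TimeZone timeZone = TimeZone.getTimeZone("Asia/Shanghai");
        Calendar calendarForDate = new Calendar.Builder()
                .setCalendarType("iso8601")
                .setLenient(true)
                .setTimeZone(timeZone)
                .build();
        System.out.println(calendarForDate.getTimeZone().getID()); // Asia/Shanghai
    }
}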
public ReaderProxy(RecordSender recordSender, TableTunnel.DownloadSession downloadSession,
|
||||
Map<String, OdpsType> columnTypeMap,
|
||||
List<Pair<String, ColumnType>> parsedColumns, String partition,
|
||||
boolean isPartitionTable, long start, long count, boolean isCompress) {
|
||||
Map<String, TypeInfo> columnTypeMap,
|
||||
List<Pair<String, ColumnType>> parsedColumns, String partition,
|
||||
boolean isPartitionTable, long start, long count, boolean isCompress, Configuration taskConfig) {
|
||||
this.recordSender = recordSender;
|
||||
this.downloadSession = downloadSession;
|
||||
this.columnTypeMap = columnTypeMap;
|
||||
@ -46,14 +82,24 @@ public class ReaderProxy {
|
||||
this.start = start;
|
||||
this.count = count;
|
||||
this.isCompress = isCompress;
|
||||
|
||||
this.calendarForDate = this.initCalendar(taskConfig);
|
||||
this.useDateWithCalendar = taskConfig.getBool("useDateWithCalendar", true);
|
||||
}
|
||||
|
||||
// warn: odps 分区列和正常列不能重名, 所有列都不不区分大小写
|
||||
public void doRead() {
|
||||
try {
|
||||
LOG.info("start={}, count={}",start, count);
|
||||
//RecordReader recordReader = downloadSession.openRecordReader(start, count, isCompress);
|
||||
RecordReader recordReader = OdpsUtil.getRecordReader(downloadSession, start, count, isCompress);
|
||||
List<Column> userConfigNormalColumns = OdpsUtil.getNormalColumns(this.parsedColumns, this.columnTypeMap);
|
||||
RecordReader recordReader = null;
|
||||
// fix #ODPS-52184/10332469, updateColumnsSize表示如果用户指定的读取源表列数100列以内的话,则进行列裁剪优化;
|
||||
int updateColumnsSize = 100;
|
||||
if(userConfigNormalColumns.size() <= updateColumnsSize){
|
||||
recordReader = OdpsUtil.getRecordReader(downloadSession, start, count, isCompress, userConfigNormalColumns);
|
||||
} else {
|
||||
recordReader = OdpsUtil.getRecordReader(downloadSession, start, count, isCompress);
|
||||
}
|
||||
|
||||
Record odpsRecord;
|
||||
Map<String, String> partitionMap = this
|
||||
@ -72,7 +118,7 @@ public class ReaderProxy {
|
||||
} catch (InterruptedException ignored) {
|
||||
}
|
||||
recordReader = downloadSession.openRecordReader(start, count, isCompress);
|
||||
LOG.warn("odps-read-exception, 重试第{}次", retryTimes);
|
||||
LOG.warn(MESSAGE_SOURCE.message("readerproxy.1", retryTimes));
|
||||
retryTimes++;
|
||||
continue;
|
||||
} else {
|
||||
@ -144,9 +190,7 @@ public class ReaderProxy {
|
||||
throw DataXException
|
||||
.asDataXException(
|
||||
OdpsReaderErrorCode.ILLEGAL_VALUE,
|
||||
String.format(
|
||||
"您的分区 [%s] 解析出现错误,解析后正确的配置方式类似为 [ pt=1,dt=1 ].",
|
||||
eachPartition));
|
||||
MESSAGE_SOURCE.message("readerproxy.2", eachPartition));
|
||||
}
|
||||
// warn: translate to lower case, it's more comfortable to
|
||||
// compare whit user's input columns
|
||||
@ -168,8 +212,7 @@ public class ReaderProxy {
|
||||
partitionColumnName = partitionColumnName.toLowerCase();
|
||||
// it's will never happen, but add this checking
|
||||
if (!partitionMap.containsKey(partitionColumnName)) {
|
||||
String errorMessage = String.format(
|
||||
"表所有分区信息为: %s 其中找不到 [%s] 对应的分区值.",
|
||||
String errorMessage = MESSAGE_SOURCE.message("readerproxy.3",
|
||||
com.alibaba.fastjson.JSON.toJSONString(partitionMap),
|
||||
partitionColumnName);
|
||||
throw DataXException.asDataXException(
|
||||
@ -190,7 +233,7 @@ public class ReaderProxy {
|
||||
* every line record of odps table
|
||||
* @param dataXRecord
|
||||
* every datax record, to be send to writer. method getXXX() case sensitive
|
||||
* @param type
|
||||
* @param typeInfo
|
||||
* odps column type
|
||||
* @param columnNameValue
|
||||
* for partition column it's column value, for normal column it's
|
||||
@ -199,83 +242,681 @@ public class ReaderProxy {
|
||||
* true means partition column and false means normal column
|
||||
* */
|
||||
private void odpsColumnToDataXField(Record odpsRecord,
|
||||
com.alibaba.datax.common.element.Record dataXRecord, OdpsType type,
|
||||
com.alibaba.datax.common.element.Record dataXRecord, TypeInfo typeInfo,
|
||||
String columnNameValue, boolean isPartitionColumn) {
|
||||
switch (type) {
|
||||
case BIGINT: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new LongColumn(columnNameValue));
|
||||
} else {
|
||||
dataXRecord.addColumn(new LongColumn(odpsRecord
|
||||
.getBigint(columnNameValue)));
|
||||
}
|
||||
break;
|
||||
}
|
||||
case BOOLEAN: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new BoolColumn(columnNameValue));
|
||||
} else {
|
||||
dataXRecord.addColumn(new BoolColumn(odpsRecord
|
||||
.getBoolean(columnNameValue)));
|
||||
}
|
||||
break;
|
||||
}
|
||||
case DATETIME: {
|
||||
if (isPartitionColumn) {
|
||||
try {
|
||||
dataXRecord.addColumn(new DateColumn(ColumnCast
|
||||
.string2Date(new StringColumn(columnNameValue))));
|
||||
} catch (ParseException e) {
|
||||
LOG.error(String.format("", this.partition));
|
||||
String errMessage = String.format(
|
||||
"您读取分区 [%s] 出现日期转换异常, 日期的字符串表示为 [%s].",
|
||||
this.partition, columnNameValue);
|
||||
LOG.error(errMessage);
|
||||
throw DataXException.asDataXException(
|
||||
OdpsReaderErrorCode.READ_DATA_FAIL, errMessage, e);
|
||||
}
|
||||
} else {
|
||||
dataXRecord.addColumn(new DateColumn(odpsRecord
|
||||
.getDatetime(columnNameValue)));
|
||||
}
|
||||
|
||||
break;
|
||||
}
|
||||
case DOUBLE: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new DoubleColumn(columnNameValue));
|
||||
} else {
|
||||
dataXRecord.addColumn(new DoubleColumn(odpsRecord
|
||||
.getDouble(columnNameValue)));
|
||||
ArrayRecord record = (ArrayRecord) odpsRecord;
|
||||
|
||||
OdpsType type = typeInfo.getOdpsType();
|
||||
|
||||
switch (type) {
|
||||
case BIGINT: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new LongColumn(columnNameValue));
|
||||
} else {
|
||||
dataXRecord.addColumn(new LongColumn(record
|
||||
.getBigint(columnNameValue)));
|
||||
}
|
||||
break;
|
||||
}
|
||||
break;
|
||||
}
|
||||
case DECIMAL: {
|
||||
if(isPartitionColumn) {
|
||||
dataXRecord.addColumn(new DoubleColumn(columnNameValue));
|
||||
} else {
|
||||
dataXRecord.addColumn(new DoubleColumn(odpsRecord.getDecimal(columnNameValue)));
|
||||
case BOOLEAN: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new BoolColumn(columnNameValue));
|
||||
} else {
|
||||
dataXRecord.addColumn(new BoolColumn(record
|
||||
.getBoolean(columnNameValue)));
|
||||
}
|
||||
break;
|
||||
}
|
||||
break;
|
||||
}
|
||||
case STRING: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new StringColumn(columnNameValue));
|
||||
} else {
|
||||
dataXRecord.addColumn(new StringColumn(odpsRecord
|
||||
.getString(columnNameValue)));
|
||||
case DATE:
|
||||
case DATETIME: {
|
||||
// odps分区列,目前支持TINYINT、SMALLINT、INT、BIGINT、VARCHAR和STRING类型
|
||||
if (isPartitionColumn) {
|
||||
try {
|
||||
dataXRecord.addColumn(new DateColumn(ColumnCast
|
||||
.string2Date(new StringColumn(columnNameValue))));
|
||||
} catch (ParseException e) {
|
||||
String errMessage = MESSAGE_SOURCE.message("readerproxy.4",
|
||||
this.partition, columnNameValue);
|
||||
LOG.error(errMessage);
|
||||
throw DataXException.asDataXException(
|
||||
OdpsReaderErrorCode.READ_DATA_FAIL, errMessage, e);
|
||||
}
|
||||
} else {
|
||||
if (com.aliyun.odps.OdpsType.DATETIME == type) {
|
||||
dataXRecord.addColumn(new DateColumn(record
|
||||
.getDatetime(columnNameValue)));
|
||||
} else {
|
||||
if (this.useDateWithCalendar) {
|
||||
dataXRecord.addColumn(new DateColumn(record.
|
||||
getDate(columnNameValue, this.calendarForDate)));
|
||||
} else {
|
||||
dataXRecord.addColumn(new DateColumn(record
|
||||
.getDate(columnNameValue)));
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
break;
|
||||
}
|
||||
break;
|
||||
}
|
||||
default:
|
||||
throw DataXException
|
||||
.asDataXException(
|
||||
OdpsReaderErrorCode.ILLEGAL_VALUE,
|
||||
String.format(
|
||||
"DataX 抽取 ODPS 数据不支持字段类型为:[%s]. 目前支持抽取的字段类型有:bigint, boolean, datetime, double, decimal, string. "
|
||||
+ "您可以选择不抽取 DataX 不支持的字段或者联系 ODPS 管理员寻求帮助.",
|
||||
type));
|
||||
case DOUBLE: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new DoubleColumn(columnNameValue));
|
||||
} else {
|
||||
dataXRecord.addColumn(new DoubleColumn(record
|
||||
.getDouble(columnNameValue)));
|
||||
}
|
||||
break;
|
||||
}
|
||||
case DECIMAL: {
|
||||
if(isPartitionColumn) {
|
||||
dataXRecord.addColumn(new DoubleColumn(columnNameValue));
|
||||
} else {
|
||||
dataXRecord.addColumn(new DoubleColumn(record.getDecimal(columnNameValue)));
|
||||
}
|
||||
break;
|
||||
}
|
||||
case STRING: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new StringColumn(columnNameValue));
|
||||
} else {
|
||||
dataXRecord.addColumn(new StringColumn(record
|
||||
.getString(columnNameValue)));
|
||||
}
|
||||
break;
|
||||
}
|
||||
case TINYINT:
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new LongColumn(columnNameValue));
|
||||
} else {
|
||||
Byte value = record.getTinyint(columnNameValue);
|
||||
Integer intValue = value != null ? value.intValue() : null;
|
||||
dataXRecord.addColumn(new LongColumn(intValue));
|
||||
}
|
||||
break;
|
||||
case SMALLINT: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new LongColumn(columnNameValue));
|
||||
} else {
|
||||
Short value = record.getSmallint(columnNameValue);
|
||||
Long valueInLong = null;
|
||||
if (null != value) {
|
||||
valueInLong = value.longValue();
|
||||
}
|
||||
dataXRecord.addColumn(new LongColumn(valueInLong));
|
||||
}
|
||||
break;
|
||||
}
|
||||
case INT: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new LongColumn(columnNameValue));
|
||||
} else {
|
||||
dataXRecord.addColumn(new LongColumn(record
|
||||
.getInt(columnNameValue)));
|
||||
}
|
||||
break;
|
||||
}
|
||||
case FLOAT: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new DoubleColumn(columnNameValue));
|
||||
} else {
|
||||
dataXRecord.addColumn(new DoubleColumn(record
|
||||
.getFloat(columnNameValue)));
|
||||
}
|
||||
break;
|
||||
}
|
||||
case VARCHAR: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new StringColumn(columnNameValue));
|
||||
} else {
|
||||
Varchar value = record.getVarchar(columnNameValue);
|
||||
String columnValue = value != null ? value.getValue() : null;
|
||||
dataXRecord.addColumn(new StringColumn(columnValue));
|
||||
}
|
||||
break;
|
||||
}
|
||||
case TIMESTAMP: {
|
||||
if (isPartitionColumn) {
|
||||
try {
|
||||
dataXRecord.addColumn(new DateColumn(ColumnCast
|
||||
.string2Date(new StringColumn(columnNameValue))));
|
||||
} catch (ParseException e) {
|
||||
String errMessage = MESSAGE_SOURCE.message("readerproxy.4",
|
||||
this.partition, columnNameValue);
|
||||
LOG.error(errMessage);
|
||||
throw DataXException.asDataXException(
|
||||
OdpsReaderErrorCode.READ_DATA_FAIL, errMessage, e);
|
||||
}
|
||||
} else {
|
||||
dataXRecord.addColumn(new DateColumn(record
|
||||
.getTimestamp(columnNameValue)));
|
||||
}
|
||||
|
||||
break;
|
||||
}
|
||||
case BINARY: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new BytesColumn(columnNameValue.getBytes()));
|
||||
} else {
|
||||
// dataXRecord.addColumn(new BytesColumn(record
|
||||
// .getBinary(columnNameValue).data()));
|
||||
Binary binaryData = record.getBinary(columnNameValue);
|
||||
if (null == binaryData) {
|
||||
dataXRecord.addColumn(new BytesColumn(null));
|
||||
} else {
|
||||
dataXRecord.addColumn(new BytesColumn(binaryData.data()));
|
||||
}
|
||||
}
|
||||
break;
|
||||
}
|
||||
case ARRAY: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new StringColumn(columnNameValue));
|
||||
} else {
|
||||
List arrayValue = record.getArray(columnNameValue);
|
||||
if (arrayValue == null) {
|
||||
dataXRecord.addColumn(new StringColumn(null));
|
||||
} else {
|
||||
dataXRecord.addColumn(new StringColumn(JSON.toJSONString(transOdpsArrayToJavaList(arrayValue, (ArrayTypeInfo)typeInfo))));
|
||||
}
|
||||
}
|
||||
break;
|
||||
}
|
||||
case MAP: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new StringColumn(columnNameValue));
|
||||
} else {
|
||||
Map mapValue = record.getMap(columnNameValue);
|
||||
if (mapValue == null) {
|
||||
dataXRecord.addColumn(new StringColumn(null));
|
||||
} else {
|
||||
dataXRecord.addColumn(new StringColumn(JSON.toJSONString(transOdpsMapToJavaMap(mapValue, (MapTypeInfo)typeInfo))));
|
||||
}
|
||||
}
|
||||
break;
|
||||
}
|
||||
case STRUCT: {
|
||||
if (isPartitionColumn) {
|
||||
dataXRecord.addColumn(new StringColumn(columnNameValue));
|
||||
} else {
|
||||
Struct structValue = record.getStruct(columnNameValue);
|
||||
if (structValue == null) {
|
||||
dataXRecord.addColumn(new StringColumn(null));
|
||||
} else {
|
||||
dataXRecord.addColumn(new StringColumn(JSON.toJSONString(transOdpsStructToJavaMap(structValue))));
|
||||
}
|
||||
}
|
||||
break;
|
||||
}
|
||||
default:
|
||||
throw DataXException.asDataXException(
|
||||
OdpsReaderErrorCode.ILLEGAL_VALUE,
|
||||
MESSAGE_SOURCE.message("readerproxy.5", type));
|
||||
}
|
||||
}
|
||||
|
||||
private List transOdpsArrayToJavaList(List odpsArray, ArrayTypeInfo typeInfo) {
|
||||
TypeInfo eleType = typeInfo.getElementTypeInfo();
|
||||
List result = new ArrayList();
|
||||
switch (eleType.getOdpsType()) {
|
||||
// warn:array<double> [1.2, 3.4] 被转为了:"["1.2", "3.4"]", 本来应该被转换成 "[1.2, 3.4]"
|
||||
// 注意回归Case覆盖
|
||||
case BIGINT:
|
||||
case DOUBLE:
|
||||
case INT:
|
||||
case FLOAT:
|
||||
case DECIMAL:
|
||||
case TINYINT:
|
||||
case SMALLINT:
|
||||
for (Object item : odpsArray) {
|
||||
Object object = item;
|
||||
result.add(object == null ? NULL_INDICATOR : object);
|
||||
}
|
||||
return result;
|
||||
case BOOLEAN: // 未调整array<Boolean> 问题
|
||||
case STRING:
|
||||
case VARCHAR:
|
||||
case CHAR:
|
||||
case TIMESTAMP:
|
||||
case DATE:
|
||||
for (Object item : odpsArray) {
|
||||
Object object = item;
|
||||
result.add(object == null ? NULL_INDICATOR : object.toString());
|
||||
}
|
||||
return result;
|
||||
/**
|
||||
* 日期类型
|
||||
*/
|
||||
case DATETIME:
|
||||
for (Object item : odpsArray) {
|
||||
Date dateVal = (Date) item;
|
||||
result.add(dateVal == null ? NULL_INDICATOR : dateFormat.format(dateVal));
|
||||
}
|
||||
return result;
|
||||
/**
|
||||
* 字节数组
|
||||
*/
|
||||
case BINARY:
|
||||
for (Object item : odpsArray) {
|
||||
Binary binaryVal = (Binary) item;
|
||||
result.add(binaryVal == null ? NULL_INDICATOR :
|
||||
Base64.encodeBase64(binaryVal.data()));
|
||||
}
|
||||
return result;
|
||||
/**
|
||||
* 日期间隔
|
||||
*/
|
||||
case INTERVAL_DAY_TIME:
|
||||
for (Object item : odpsArray) {
|
||||
IntervalDayTime dayTimeVal = (IntervalDayTime) item;
|
||||
result.add(dayTimeVal == null ? NULL_INDICATOR :
|
||||
transIntervalDayTimeToJavaMap(dayTimeVal));
|
||||
}
|
||||
return result;
|
||||
/**
|
||||
* 年份间隔
|
||||
*/
|
||||
case INTERVAL_YEAR_MONTH:
|
||||
for (Object item : odpsArray) {
|
||||
IntervalYearMonth yearMonthVal = (IntervalYearMonth) item;
|
||||
result.add(yearMonthVal == null ? NULL_INDICATOR :
|
||||
transIntervalYearMonthToJavaMap(yearMonthVal));
|
||||
}
|
||||
return result;
|
||||
/**
|
||||
* 结构体
|
||||
*/
|
||||
case STRUCT:
|
||||
for (Object item : odpsArray) {
|
||||
Struct structVal = (Struct) item;
|
||||
result.add(structVal == null ? NULL_INDICATOR :
|
||||
transOdpsStructToJavaMap(structVal));
|
||||
}
|
||||
return result;
|
||||
/**
|
||||
* MAP类型
|
||||
*/
|
||||
case MAP:
|
||||
for (Object item : odpsArray) {
|
||||
Map mapVal = (Map) item;
|
||||
result.add(mapVal == null ? NULL_INDICATOR :
|
||||
transOdpsMapToJavaMap(mapVal, (MapTypeInfo) eleType));
|
||||
}
|
||||
return result;
|
||||
/**
|
||||
* ARRAY类型
|
||||
*/
|
||||
case ARRAY:
|
||||
for (Object item : odpsArray) {
|
||||
List arrayVal = (List) item;
|
||||
result.add(arrayVal == null ? NULL_INDICATOR :
|
||||
transOdpsArrayToJavaList(arrayVal, (ArrayTypeInfo) eleType));
|
||||
}
|
||||
return result;
|
||||
default:
|
||||
throw new IllegalArgumentException("decode record failed. column type: " + eleType.getTypeName());
|
||||
}
|
||||
}
|
||||
|
||||
private Map transOdpsMapToJavaMap(Map odpsMap, MapTypeInfo typeInfo) {
|
||||
TypeInfo keyType = typeInfo.getKeyTypeInfo();
|
||||
TypeInfo valueType = typeInfo.getValueTypeInfo();
|
||||
Map result = new HashMap();
|
||||
Set<Map.Entry> entrySet = null;
|
||||
switch (valueType.getOdpsType()) {
|
||||
case BIGINT:
|
||||
case DOUBLE:
|
||||
case BOOLEAN:
|
||||
case STRING:
|
||||
case DECIMAL:
|
||||
case TINYINT:
|
||||
case SMALLINT:
|
||||
case INT:
|
||||
case FLOAT:
|
||||
case CHAR:
|
||||
case VARCHAR:
|
||||
case DATE:
|
||||
case TIMESTAMP:
|
||||
switch (keyType.getOdpsType()) {
|
||||
case DATETIME:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
Object value = item.getValue();
|
||||
result.put(dateFormat.format((Date)item.getKey()), value == null ? NULL_INDICATOR : value.toString());
|
||||
}
|
||||
return result;
|
||||
case BINARY:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
Object value = item.getValue();
|
||||
result.put(Base64.encodeBase64(((Binary)item.getKey()).data()),
|
||||
value == null ? NULL_INDICATOR : value.toString());
|
||||
}
|
||||
return result;
|
||||
default:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
Object value = item.getValue();
|
||||
result.put(item.getKey(), value == null ? NULL_INDICATOR : value.toString());
|
||||
}
|
||||
return result;
|
||||
}
|
||||
/**
|
||||
* 日期类型
|
||||
*/
|
||||
case DATETIME:
|
||||
switch (keyType.getOdpsType()) {
|
||||
case DATETIME:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
Date dateVal = (Date) item.getValue();
|
||||
result.put(dateFormat.format((Date)item.getKey()),
|
||||
dateVal == null ? NULL_INDICATOR : dateFormat.format(dateVal));
|
||||
}
|
||||
return result;
|
||||
case BINARY:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
Date dateVal = (Date) item.getValue();
|
||||
result.put(Base64.encodeBase64(((Binary)item.getKey()).data()),
|
||||
dateVal == null ? NULL_INDICATOR : dateFormat.format(dateVal));
|
||||
}
|
||||
return result;
|
||||
default:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
Date dateVal = (Date) item.getValue();
|
||||
result.put(item.getKey(), dateVal == null ? NULL_INDICATOR : dateFormat.format(dateVal));
|
||||
}
|
||||
return result;
|
||||
}
|
||||
/**
|
||||
* 字节数组
|
||||
*/
|
||||
case BINARY:
|
||||
switch (keyType.getOdpsType()) {
|
||||
case DATETIME:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
Binary binaryVal = (Binary) item.getValue();
|
||||
result.put(dateFormat.format((Date)item.getKey()), binaryVal == null ? NULL_INDICATOR :
|
||||
Base64.encodeBase64(binaryVal.data()));
|
||||
}
|
||||
return result;
|
||||
case BINARY:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
Binary binaryVal = (Binary) item.getValue();
|
||||
result.put(Base64.encodeBase64(((Binary)item.getKey()).data()),
|
||||
binaryVal == null ? NULL_INDICATOR :
|
||||
Base64.encodeBase64(binaryVal.data()));
|
||||
}
|
||||
return result;
|
||||
default:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
Binary binaryVal = (Binary) item.getValue();
|
||||
result.put(item.getKey(), binaryVal == null ? NULL_INDICATOR :
|
||||
Base64.encodeBase64(binaryVal.data()));
|
||||
}
|
||||
return result;
|
||||
}
|
||||
/**
|
||||
* 日期间隔
|
||||
*/
|
||||
case INTERVAL_DAY_TIME:
|
||||
switch (keyType.getOdpsType()) {
|
||||
case DATETIME:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
IntervalDayTime dayTimeVal = (IntervalDayTime) item.getValue();
|
||||
result.put(dateFormat.format((Date)item.getKey()), dayTimeVal == null ? NULL_INDICATOR :
|
||||
transIntervalDayTimeToJavaMap(dayTimeVal));
|
||||
}
|
||||
return result;
|
||||
case BINARY:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
IntervalDayTime dayTimeVal = (IntervalDayTime) item.getValue();
|
||||
result.put(Base64.encodeBase64(((Binary)item.getKey()).data()),
|
||||
dayTimeVal == null ? NULL_INDICATOR :
|
||||
transIntervalDayTimeToJavaMap(dayTimeVal));
|
||||
}
|
||||
return result;
|
||||
default:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
IntervalDayTime dayTimeVal = (IntervalDayTime) item.getValue();
|
||||
result.put(item.getKey(), dayTimeVal == null ? NULL_INDICATOR :
|
||||
transIntervalDayTimeToJavaMap(dayTimeVal));
|
||||
}
|
||||
return result;
|
||||
}
|
||||
/**
|
||||
* 年份间隔
|
||||
*/
|
||||
case INTERVAL_YEAR_MONTH:
|
||||
switch (keyType.getOdpsType()) {
|
||||
case DATETIME:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
IntervalYearMonth yearMonthVal = (IntervalYearMonth) item.getValue();
|
||||
result.put(dateFormat.format((Date)item.getKey()), yearMonthVal == null ? NULL_INDICATOR :
|
||||
transIntervalYearMonthToJavaMap(yearMonthVal));
|
||||
}
|
||||
return result;
|
||||
case BINARY:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
IntervalYearMonth yearMonthVal = (IntervalYearMonth) item.getValue();
|
||||
result.put(Base64.encodeBase64(((Binary)item.getKey()).data()),
|
||||
yearMonthVal == null ? NULL_INDICATOR :
|
||||
transIntervalYearMonthToJavaMap(yearMonthVal));
|
||||
}
|
||||
return result;
|
||||
default:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
IntervalYearMonth yearMonthVal = (IntervalYearMonth) item.getValue();
|
||||
result.put(item.getKey(), yearMonthVal == null ? NULL_INDICATOR :
|
||||
transIntervalYearMonthToJavaMap(yearMonthVal));
|
||||
}
|
||||
return result;
|
||||
}
|
||||
/**
|
||||
* 结构体
|
||||
*/
|
||||
case STRUCT:
|
||||
switch (keyType.getOdpsType()) {
|
||||
case DATETIME:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
Struct structVal = (Struct) item.getValue();
|
||||
result.put(dateFormat.format((Date)item.getKey()), structVal == null ? NULL_INDICATOR :
|
||||
transOdpsStructToJavaMap(structVal));
|
||||
}
|
||||
return result;
|
||||
case BINARY:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
Struct structVal = (Struct) item.getValue();
|
||||
result.put(Base64.encodeBase64(((Binary)item.getKey()).data()),
|
||||
structVal == null ? NULL_INDICATOR :
|
||||
transOdpsStructToJavaMap(structVal));
|
||||
}
|
||||
return result;
|
||||
default:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
Struct structVal = (Struct) item.getValue();
|
||||
result.put(item.getKey(), structVal == null ? NULL_INDICATOR :
|
||||
transOdpsStructToJavaMap(structVal));
|
||||
}
|
||||
return result;
|
||||
}
|
||||
/**
|
||||
* MAP类型
|
||||
*/
|
||||
case MAP:
|
||||
switch (keyType.getOdpsType()) {
|
||||
case DATETIME:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
Map mapVal = (Map) item.getValue();
|
||||
result.put(dateFormat.format((Date)item.getKey()),mapVal == null ? NULL_INDICATOR :
|
||||
transOdpsMapToJavaMap(mapVal, (MapTypeInfo) valueType));
|
||||
}
|
||||
return result;
|
||||
case BINARY:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
Map mapVal = (Map) item.getValue();
|
||||
result.put(Base64.encodeBase64(((Binary)item.getKey()).data()),
|
||||
mapVal == null ? NULL_INDICATOR : transOdpsMapToJavaMap(mapVal, (MapTypeInfo) valueType));
|
||||
}
|
||||
return result;
|
||||
default:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
Map mapVal = (Map) item.getValue();
|
||||
result.put(item.getKey(), mapVal == null ? NULL_INDICATOR :
|
||||
transOdpsMapToJavaMap(mapVal, (MapTypeInfo) valueType));
|
||||
}
|
||||
return result;
|
||||
}
|
||||
/**
|
||||
* ARRAY类型
|
||||
*/
|
||||
case ARRAY:
|
||||
switch (keyType.getOdpsType()) {
|
||||
case DATETIME:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
List arrayVal = (List) item.getValue();
|
||||
result.put(dateFormat.format((Date)item.getKey()),arrayVal == null ? NULL_INDICATOR :
|
||||
transOdpsArrayToJavaList(arrayVal, (ArrayTypeInfo) valueType));
|
||||
}
|
||||
return result;
|
||||
case BINARY:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
List arrayVal = (List) item.getValue();
|
||||
result.put(Base64.encodeBase64(((Binary)item.getKey()).data()),
|
||||
arrayVal == null ? NULL_INDICATOR : transOdpsArrayToJavaList(arrayVal, (ArrayTypeInfo) valueType));
|
||||
}
|
||||
return result;
|
||||
default:
|
||||
entrySet = odpsMap.entrySet();
|
||||
for (Map.Entry item : entrySet) {
|
||||
List arrayVal = (List) item.getValue();
|
||||
result.put(item.getKey(), arrayVal == null ? NULL_INDICATOR :
|
||||
transOdpsArrayToJavaList(arrayVal, (ArrayTypeInfo) valueType));
|
||||
}
|
||||
return result;
|
||||
}
|
||||
default:
|
||||
throw new IllegalArgumentException("decode record failed. column type: " + valueType.getTypeName());
|
||||
}
|
||||
}
|
||||
|
||||
private Map transIntervalDayTimeToJavaMap(IntervalDayTime dayTime) {
|
||||
Map<String, Long> result = new HashMap<String, Long>();
|
||||
result.put("totalSeconds", dayTime.getTotalSeconds());
|
||||
result.put("nanos", (long)dayTime.getNanos());
|
||||
return result;
|
||||
}
|
||||
|
||||
private Map transOdpsStructToJavaMap(Struct odpsStruct) {
|
||||
Map result = new HashMap();
|
||||
for (int i = 0; i < odpsStruct.getFieldCount(); i++) {
|
||||
String fieldName = odpsStruct.getFieldName(i);
|
||||
Object fieldValue = odpsStruct.getFieldValue(i);
|
||||
TypeInfo fieldType = odpsStruct.getFieldTypeInfo(i);
|
||||
switch (fieldType.getOdpsType()) {
|
||||
case BIGINT:
|
||||
case DOUBLE:
|
||||
case BOOLEAN:
|
||||
case STRING:
|
||||
case DECIMAL:
|
||||
case TINYINT:
|
||||
case SMALLINT:
|
||||
case INT:
|
||||
case FLOAT:
|
||||
case VARCHAR:
|
||||
case CHAR:
|
||||
case TIMESTAMP:
|
||||
case DATE:
|
||||
result.put(fieldName, fieldValue == null ? NULL_INDICATOR : fieldValue.toString());
|
||||
break;
|
||||
/**
|
||||
* 日期类型
|
||||
*/
|
||||
case DATETIME:
|
||||
Date dateVal = (Date) fieldValue;
|
||||
result.put(fieldName, dateVal == null ? NULL_INDICATOR : dateFormat.format(dateVal));
|
||||
break;
|
||||
/**
|
||||
* 字节数组
|
||||
*/
|
||||
case BINARY:
|
||||
Binary binaryVal = (Binary) fieldValue;
|
||||
result.put(fieldName, binaryVal == null ? NULL_INDICATOR :
|
||||
Base64.encodeBase64(binaryVal.data()));
|
||||
break;
|
||||
/**
|
||||
* 日期间隔
|
||||
*/
|
||||
case INTERVAL_DAY_TIME:
|
||||
IntervalDayTime dayTimeVal = (IntervalDayTime) fieldValue;
|
||||
result.put(fieldName, dayTimeVal == null ? NULL_INDICATOR :
|
||||
transIntervalDayTimeToJavaMap(dayTimeVal));
|
||||
break;
|
||||
/**
|
||||
* 年份间隔
|
||||
*/
|
||||
case INTERVAL_YEAR_MONTH:
|
||||
IntervalYearMonth yearMonthVal = (IntervalYearMonth) fieldValue;
|
||||
result.put(fieldName, yearMonthVal == null ? NULL_INDICATOR :
|
||||
transIntervalYearMonthToJavaMap(yearMonthVal));
|
||||
break;
|
||||
/**
|
||||
* 结构体
|
||||
*/
|
||||
case STRUCT:
|
||||
Struct structVal = (Struct) fieldValue;
|
||||
result.put(fieldName, structVal == null ? NULL_INDICATOR :
|
||||
transOdpsStructToJavaMap(structVal));
|
||||
break;
|
||||
/**
|
||||
* MAP类型
|
||||
*/
|
||||
case MAP:
|
||||
Map mapVal = (Map) fieldValue;
|
||||
result.put(fieldName, mapVal == null ? NULL_INDICATOR :
|
||||
transOdpsMapToJavaMap(mapVal, (MapTypeInfo) fieldType));
|
||||
break;
|
||||
/**
|
||||
* ARRAY类型
|
||||
*/
|
||||
case ARRAY:
|
||||
List arrayVal = (List) fieldValue;
|
||||
result.put(fieldName, arrayVal == null ? NULL_INDICATOR :
|
||||
transOdpsArrayToJavaList(arrayVal, (ArrayTypeInfo) fieldType));
|
||||
break;
|
||||
default:
|
||||
throw new IllegalArgumentException("decode record failed. column type: " + fieldType.getTypeName());
|
||||
}
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
private Map transIntervalYearMonthToJavaMap(IntervalYearMonth yearMonth) {
|
||||
Map <String, Integer> result = new HashMap<String, Integer>();
|
||||
result.put("years", yearMonth.getYears());
|
||||
result.put("months", yearMonth.getMonths());
|
||||
return result;
|
||||
}
|
||||
|
||||
}
|
||||
|
@ -1,5 +1,5 @@
|
||||
/**
|
||||
 * (C) 2010-2022 Alibaba Group Holding Limited.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
@ -18,9 +18,11 @@ package com.alibaba.datax.plugin.reader.odpsreader.util;
|
||||
|
||||
import com.alibaba.datax.common.exception.DataXException;
|
||||
import com.alibaba.datax.common.util.Configuration;
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.Constant;
|
||||
import com.alibaba.datax.common.util.IdAndKeyRollingUtil;
|
||||
import com.alibaba.datax.common.util.MessageSource;
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.Key;
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.OdpsReaderErrorCode;
|
||||
|
||||
import org.apache.commons.lang3.StringUtils;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
@ -29,6 +31,7 @@ import java.util.Map;
|
||||
|
||||
public class IdAndKeyUtil {
|
||||
private static Logger LOG = LoggerFactory.getLogger(IdAndKeyUtil.class);
|
||||
private static MessageSource MESSAGE_SOURCE = MessageSource.loadResourceBundle(IdAndKeyUtil.class);
|
||||
|
||||
public static Configuration parseAccessIdAndKey(Configuration originalConfig) {
|
||||
String accessId = originalConfig.getString(Key.ACCESS_ID);
|
||||
@ -50,36 +53,13 @@ public class IdAndKeyUtil {
|
||||
|
||||
private static Configuration getAccessIdAndKeyFromEnv(Configuration originalConfig,
                                                          Map<String, String> envProp) {
        // 如果获取到ak,在getAccessIdAndKeyFromEnv中已经设置到originalConfig了
        String accessKey = IdAndKeyRollingUtil.getAccessIdAndKeyFromEnv(originalConfig);
        if (StringUtils.isBlank(accessKey)) {
            // 无处获取(既没有配置在作业中,也没用在环境变量中)
            throw DataXException.asDataXException(OdpsReaderErrorCode.GET_ID_KEY_FAIL,
                    MESSAGE_SOURCE.message("idandkeyutil.2"));
        }

        return originalConfig;
    }
}
|
||||
|
@ -0,0 +1,25 @@
|
||||
descipher.1=\u957F\u5EA6\u4E0D\u662F\u5076\u6570
|
||||
|
||||
idandkeyutil.1=\u4ECE\u73AF\u5883\u53D8\u91CF\u4E2D\u83B7\u53D6accessId/accessKey \u5931\u8D25, accessId=[{0}]
|
||||
idandkeyutil.2=\u65E0\u6CD5\u83B7\u53D6\u5230accessId/accessKey. \u5B83\u4EEC\u65E2\u4E0D\u5B58\u5728\u4E8E\u60A8\u7684\u914D\u7F6E\u4E2D\uFF0C\u4E5F\u4E0D\u5B58\u5728\u4E8E\u73AF\u5883\u53D8\u91CF\u4E2D.
|
||||
|
||||
|
||||
odpssplitutil.1=\u60A8\u6240\u914D\u7F6E\u7684\u5206\u533A\u4E0D\u80FD\u4E3A\u7A7A\u767D.
|
||||
odpssplitutil.2=\u5207\u5206\u7684 recordCount \u4E0D\u80FD\u4E3A\u8D1F\u6570.recordCount={0}
|
||||
odpssplitutil.3=\u5207\u5206\u7684 adviceNum \u4E0D\u80FD\u4E3A\u8D1F\u6570.adviceNum={0}
|
||||
odpssplitutil.4=\u6CE8\u610F: \u7531\u4E8E\u60A8\u914D\u7F6E\u4E86successOnNoPartition\u503C\u4E3Atrue (\u5373\u5F53\u5206\u533A\u503C\u4E0D\u5B58\u5728\u65F6, \u540C\u6B65\u4EFB\u52A1\u4E0D\u62A5\u9519), \u60A8\u8BBE\u7F6E\u7684\u5206\u533A\u65E0\u6CD5\u5339\u914D\u5230ODPS\u8868\u4E2D\u5BF9\u5E94\u7684\u5206\u533A, \u540C\u6B65\u4EFB\u52A1\u7EE7\u7EED...
|
||||
|
||||
odpsutil.1=datax\u83B7\u53D6\u4E0D\u5230\u6E90\u8868\u7684\u5217\u4FE1\u606F\uFF0C \u7531\u4E8E\u60A8\u672A\u914D\u7F6E\u8BFB\u53D6\u6E90\u5934\u8868\u7684\u5217\u4FE1\u606F. datax\u65E0\u6CD5\u77E5\u9053\u8BE5\u62BD\u53D6\u8868\u7684\u54EA\u4E9B\u5B57\u6BB5\u7684\u6570\u636E\uFF0C \u6B63\u786E\u7684\u914D\u7F6E\u65B9\u5F0F\u662F\u7ED9 column \u914D\u7F6E\u4E0A\u60A8\u9700\u8981\u8BFB\u53D6\u7684\u5217\u540D\u79F0,\u7528\u82F1\u6587\u9017\u53F7\u5206\u9694.
|
||||
odpsutil.2=\u60A8\u6240\u914D\u7F6E\u7684maxRetryTime \u503C\u9519\u8BEF. \u8BE5\u503C\u4E0D\u80FD\u5C0F\u4E8E1, \u4E14\u4E0D\u80FD\u5927\u4E8E {0}. \u63A8\u8350\u7684\u914D\u7F6E\u65B9\u5F0F\u662F\u7ED9maxRetryTime \u914D\u7F6E1-11\u4E4B\u95F4\u7684\u67D0\u4E2A\u503C. \u8BF7\u60A8\u68C0\u67E5\u914D\u7F6E\u5E76\u505A\u51FA\u76F8\u5E94\u4FEE\u6539.
|
||||
odpsutil.3=\u4E0D\u652F\u6301\u7684\u8D26\u53F7\u7C7B\u578B:[{0}]. \u8D26\u53F7\u7C7B\u578B\u76EE\u524D\u4EC5\u652F\u6301aliyun, taobao.
|
||||
odpsutil.4=\u60A8\u6240\u914D\u7F6E\u7684\u5206\u533A\u4E0D\u80FD\u4E3A\u7A7A\u767D.
|
||||
odpsutil.5=\u6E90\u5934\u8868\u7684\u5217\u914D\u7F6E\u9519\u8BEF. \u60A8\u6240\u914D\u7F6E\u7684\u5217 [{0}] \u4E0D\u5B58\u5728.
|
||||
odpsutil.6=open RecordReader\u5931\u8D25. \u8BF7\u8054\u7CFB ODPS \u7BA1\u7406\u5458\u5904\u7406.
|
||||
odpsutil.7=\u52A0\u8F7D ODPS \u6E90\u5934\u8868:{0} \u5931\u8D25. \u8BF7\u68C0\u67E5\u60A8\u914D\u7F6E\u7684 ODPS \u6E90\u5934\u8868\u7684 [project] \u662F\u5426\u6B63\u786E.
|
||||
odpsutil.8=\u52A0\u8F7D ODPS \u6E90\u5934\u8868:{0} \u5931\u8D25. \u8BF7\u68C0\u67E5\u60A8\u914D\u7F6E\u7684 ODPS \u6E90\u5934\u8868\u7684 [table] \u662F\u5426\u6B63\u786E.
|
||||
odpsutil.9=\u52A0\u8F7D ODPS \u6E90\u5934\u8868:{0} \u5931\u8D25. \u8BF7\u68C0\u67E5\u60A8\u914D\u7F6E\u7684 ODPS \u6E90\u5934\u8868\u7684 [accessId] [accessKey]\u662F\u5426\u6B63\u786E.
|
||||
odpsutil.10=\u52A0\u8F7D ODPS \u6E90\u5934\u8868:{0} \u5931\u8D25. \u8BF7\u68C0\u67E5\u60A8\u914D\u7F6E\u7684 ODPS \u6E90\u5934\u8868\u7684 [accessKey] \u662F\u5426\u6B63\u786E.
|
||||
odpsutil.11=\u52A0\u8F7D ODPS \u6E90\u5934\u8868:{0} \u5931\u8D25. \u8BF7\u68C0\u67E5\u60A8\u914D\u7F6E\u7684 ODPS \u6E90\u5934\u8868\u7684 [accessId] [accessKey] [project]\u662F\u5426\u5339\u914D.
|
||||
odpsutil.12=\u52A0\u8F7D ODPS \u6E90\u5934\u8868:{0} \u5931\u8D25. \u8BF7\u68C0\u67E5\u60A8\u914D\u7F6E\u7684 ODPS \u6E90\u5934\u8868\u7684 project,table,accessId,accessKey,odpsServer\u7B49\u503C.
|
||||
odpsutil.13=\u6267\u884C ODPS SQL\u5931\u8D25, \u8FD4\u56DE\u503C\u4E3A:{0}. \u8BF7\u4ED4\u7EC6\u68C0\u67E5ODPS SQL\u662F\u5426\u6B63\u786E, \u5982\u679C\u68C0\u67E5\u65E0\u8BEF, \u8BF7\u8054\u7CFB ODPS \u503C\u73ED\u540C\u5B66\u5904\u7406. SQL \u5185\u5BB9\u4E3A:[\n{1}\n].
|
||||
odpsutil.14=\u6267\u884C ODPS SQL \u65F6\u629B\u51FA\u5F02\u5E38, \u8BF7\u4ED4\u7EC6\u68C0\u67E5ODPS SQL\u662F\u5426\u6B63\u786E, \u5982\u679C\u68C0\u67E5\u65E0\u8BEF, \u8BF7\u8054\u7CFB ODPS \u503C\u73ED\u540C\u5B66\u5904\u7406. SQL \u5185\u5BB9\u4E3A:[\n{0}\n].
|
@ -2,19 +2,26 @@ package com.alibaba.datax.plugin.reader.odpsreader.util;
|
||||
|
||||
import com.alibaba.datax.common.exception.DataXException;
|
||||
import com.alibaba.datax.common.util.Configuration;
|
||||
import com.alibaba.datax.common.util.MessageSource;
|
||||
import com.alibaba.datax.common.util.RangeSplitUtil;
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.Constant;
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.Key;
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.OdpsReaderErrorCode;
|
||||
import com.aliyun.odps.Odps;
|
||||
import com.aliyun.odps.tunnel.TableTunnel.DownloadSession;
|
||||
|
||||
import org.apache.commons.lang3.tuple.ImmutablePair;
|
||||
import org.apache.commons.lang3.tuple.Pair;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.List;
|
||||
|
||||
public final class OdpsSplitUtil {
|
||||
private static final Logger LOG = LoggerFactory.getLogger(OdpsSplitUtil.class);
|
||||
|
||||
private static final MessageSource MESSAGE_SOURCE = MessageSource.loadResourceBundle(OdpsSplitUtil.class);
|
||||
|
||||
public static List<Configuration> doSplit(Configuration originalConfig, Odps odps,
|
||||
int adviceNum) {
|
||||
@ -36,9 +43,17 @@ public final class OdpsSplitUtil {
|
||||
List<String> partitions = originalConfig.getList(Key.PARTITION,
|
||||
String.class);
|
||||
|
||||
if ((null == partitions || partitions.isEmpty()) && originalConfig.getBool(Key.SUCCESS_ON_NO_PATITION, false)) {
|
||||
Configuration tempConfig = originalConfig.clone();
|
||||
tempConfig.set(Key.PARTITION, null);
|
||||
splittedConfigs.add(tempConfig);
|
||||
LOG.warn(MESSAGE_SOURCE.message("odpssplitutil.4"));
|
||||
return splittedConfigs;
|
||||
}
|
||||
|
||||
if (null == partitions || partitions.isEmpty()) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.ILLEGAL_VALUE,
|
||||
"您所配置的分区不能为空白.");
|
||||
MESSAGE_SOURCE.message("odpssplitutil.1"));
|
||||
}
|
||||
|
||||
//splitMode 默认为 record
|
||||
@ -141,11 +156,11 @@ public final class OdpsSplitUtil {
|
||||
*/
|
||||
private static List<Pair<Long, Long>> splitRecordCount(long recordCount, int adviceNum) {
|
||||
if(recordCount<0){
|
||||
throw new IllegalArgumentException("切分的 recordCount 不能为负数.recordCount=" + recordCount);
|
||||
throw new IllegalArgumentException(MESSAGE_SOURCE.message("odpssplitutil.2", recordCount));
|
||||
}
|
||||
|
||||
if(adviceNum<1){
|
||||
throw new IllegalArgumentException("切分的 adviceNum 不能为负数.adviceNum=" + adviceNum);
|
||||
throw new IllegalArgumentException(MESSAGE_SOURCE.message("odpssplitutil.3", adviceNum));
|
||||
}
|
||||
|
||||
List<Pair<Long, Long>> result = new ArrayList<Pair<Long, Long>>();
|
||||
|
@ -2,16 +2,22 @@ package com.alibaba.datax.plugin.reader.odpsreader.util;
|
||||
|
||||
import com.alibaba.datax.common.exception.DataXException;
|
||||
import com.alibaba.datax.common.util.Configuration;
|
||||
import com.alibaba.datax.common.util.DataXCaseEnvUtil;
|
||||
import com.alibaba.datax.common.util.MessageSource;
|
||||
import com.alibaba.datax.common.util.RetryUtil;
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.ColumnType;
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.Constant;
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.Key;
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.OdpsReaderErrorCode;
|
||||
import com.aliyun.odps.*;
|
||||
import com.aliyun.odps.Column;
|
||||
import com.aliyun.odps.account.Account;
|
||||
import com.aliyun.odps.account.AliyunAccount;
|
||||
import com.aliyun.odps.account.StsAccount;
|
||||
import com.aliyun.odps.data.RecordReader;
|
||||
import com.aliyun.odps.task.SQLTask;
|
||||
import com.aliyun.odps.tunnel.TableTunnel;
|
||||
import com.aliyun.odps.type.TypeInfo;
|
||||
|
||||
import org.apache.commons.lang3.StringUtils;
|
||||
import org.apache.commons.lang3.tuple.MutablePair;
|
||||
@ -19,13 +25,12 @@ import org.apache.commons.lang3.tuple.Pair;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.util.*;
|
||||
import java.util.concurrent.Callable;
|
||||
|
||||
public final class OdpsUtil {
|
||||
private static final Logger LOG = LoggerFactory.getLogger(OdpsUtil.class);
|
||||
private static final MessageSource MESSAGE_SOURCE = MessageSource.loadResourceBundle(OdpsUtil.class);
|
||||
|
||||
public static int MAX_RETRY_TIME = 10;
|
||||
|
||||
@ -37,8 +42,8 @@ public final class OdpsUtil {
|
||||
|
||||
if (null == originalConfig.getList(Key.COLUMN) ||
|
||||
originalConfig.getList(Key.COLUMN, String.class).isEmpty()) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.REQUIRED_VALUE, "datax获取不到源表的列信息, 由于您未配置读取源头表的列信息. datax无法知道该抽取表的哪些字段的数据 " +
|
||||
"正确的配置方式是给 column 配置上您需要读取的列名称,用英文逗号分隔.");
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.REQUIRED_VALUE,
|
||||
MESSAGE_SOURCE.message("odpsutil.1"));
|
||||
}
|
||||
|
||||
}
|
||||
@ -47,8 +52,8 @@ public final class OdpsUtil {
|
||||
int maxRetryTime = originalConfig.getInt(Key.MAX_RETRY_TIME,
|
||||
OdpsUtil.MAX_RETRY_TIME);
|
||||
if (maxRetryTime < 1 || maxRetryTime > OdpsUtil.MAX_RETRY_TIME) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.ILLEGAL_VALUE, "您所配置的maxRetryTime 值错误. 该值不能小于1, 且不能大于 " + OdpsUtil.MAX_RETRY_TIME +
|
||||
". 推荐的配置方式是给maxRetryTime 配置1-11之间的某个值. 请您检查配置并做出相应修改.");
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.ILLEGAL_VALUE,
|
||||
MESSAGE_SOURCE.message("odpsutil.2", OdpsUtil.MAX_RETRY_TIME));
|
||||
}
|
||||
MAX_RETRY_TIME = maxRetryTime;
|
||||
}
|
||||
@ -59,11 +64,12 @@ public final class OdpsUtil {
|
||||
String accessId = originalConfig.getString(Key.ACCESS_ID);
|
||||
String accessKey = originalConfig.getString(Key.ACCESS_KEY);
|
||||
String project = originalConfig.getString(Key.PROJECT);
|
||||
String securityToken = originalConfig.getString(Key.SECURITY_TOKEN);
|
||||
|
||||
String packageAuthorizedProject = originalConfig.getString(Key.PACKAGE_AUTHORIZED_PROJECT);
|
||||
|
||||
String defaultProject;
|
||||
        if (StringUtils.isBlank(packageAuthorizedProject)) {
|
||||
defaultProject = project;
|
||||
} else {
|
||||
defaultProject = packageAuthorizedProject;
|
||||
@ -74,21 +80,26 @@ public final class OdpsUtil {
|
||||
|
||||
Account account = null;
|
||||
if (accountType.equalsIgnoreCase(Constant.DEFAULT_ACCOUNT_TYPE)) {
|
||||
if (StringUtils.isNotBlank(securityToken)) {
|
||||
account = new StsAccount(accessId, accessKey, securityToken);
|
||||
} else {
|
||||
account = new AliyunAccount(accessId, accessKey);
|
||||
}
|
||||
} else {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.ACCOUNT_TYPE_ERROR,
|
||||
String.format("不支持的账号类型:[%s]. 账号类型目前仅支持aliyun, taobao.", accountType));
|
||||
MESSAGE_SOURCE.message("odpsutil.3", accountType));
|
||||
}
|
||||
|
||||
Odps odps = new Odps(account);
|
||||
boolean isPreCheck = originalConfig.getBool("dryRun", false);
|
||||
        if (isPreCheck) {
|
||||
odps.getRestClient().setConnectTimeout(3);
|
||||
odps.getRestClient().setReadTimeout(3);
|
||||
odps.getRestClient().setRetryTimes(2);
|
||||
}
|
||||
odps.setDefaultProject(defaultProject);
|
||||
odps.setEndpoint(odpsServer);
|
||||
odps.setUserAgent("DATAX");
|
||||
|
||||
return odps;
|
||||
}
|
||||
@ -103,7 +114,7 @@ public final class OdpsUtil {
|
||||
table.reload();
|
||||
return table;
|
||||
}
|
||||
            }, DataXCaseEnvUtil.getRetryTimes(3), DataXCaseEnvUtil.getRetryInterval(1000), DataXCaseEnvUtil.getRetryExponential(false));
|
||||
} catch (Exception e) {
|
||||
throwDataXExceptionWhenReloadTable(e, tableName);
|
||||
}
|
||||
@ -154,7 +165,7 @@ public final class OdpsUtil {
|
||||
public static String formatPartition(String partition) {
|
||||
if (StringUtils.isBlank(partition)) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.ILLEGAL_VALUE,
|
||||
"您所配置的分区不能为空白.");
|
||||
MESSAGE_SOURCE.message("odpsutil.4"));
|
||||
} else {
|
||||
return partition.trim().replaceAll(" *= *", "=")
|
||||
.replaceAll(" */ *", ",").replaceAll(" *, *", ",")
|
||||
@ -175,6 +186,35 @@ public final class OdpsUtil {
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* 将用户配置的分区分类成两类:
|
||||
* (1) 包含 HINT 的区间过滤;
|
||||
* (2) 不包含 HINT 的普通模式
|
||||
* @param userConfiguredPartitions
|
||||
* @return
|
||||
*/
|
||||
public static UserConfiguredPartitionClassification classifyUserConfiguredPartitions(List<String> userConfiguredPartitions){
|
||||
UserConfiguredPartitionClassification userConfiguredPartitionClassification = new UserConfiguredPartitionClassification();
|
||||
|
||||
List<String> userConfiguredHintPartition = new ArrayList<String>();
|
||||
List<String> userConfiguredNormalPartition = new ArrayList<String>();
|
||||
boolean isIncludeHintPartition = false;
|
||||
for (String userConfiguredPartition : userConfiguredPartitions){
|
||||
if (StringUtils.isNotBlank(userConfiguredPartition)){
|
||||
if (userConfiguredPartition.trim().toLowerCase().startsWith(Constant.PARTITION_FILTER_HINT)) {
|
||||
userConfiguredHintPartition.add(userConfiguredPartition.trim());
|
||||
isIncludeHintPartition = true;
|
||||
}else {
|
||||
userConfiguredNormalPartition.add(userConfiguredPartition.trim());
|
||||
}
|
||||
}
|
||||
}
|
||||
userConfiguredPartitionClassification.setIncludeHintPartition(isIncludeHintPartition);
|
||||
userConfiguredPartitionClassification.setUserConfiguredHintPartition(userConfiguredHintPartition);
|
||||
userConfiguredPartitionClassification.setUserConfiguredNormalPartition(userConfiguredNormalPartition);
|
||||
return userConfiguredPartitionClassification;
|
||||
}
|
||||
|
||||
public static List<Pair<String, ColumnType>> parseColumns(
|
||||
List<String> allNormalColumns, List<String> allPartitionColumns,
|
||||
List<String> userConfiguredColumns) {
|
||||
@ -213,14 +253,14 @@ public final class OdpsUtil {
|
||||
// not exist column
|
||||
throw DataXException.asDataXException(
|
||||
OdpsReaderErrorCode.ILLEGAL_VALUE,
|
||||
String.format("源头表的列配置错误. 您所配置的列 [%s] 不存在.", column));
|
||||
MESSAGE_SOURCE.message("odpsutil.5", column));
|
||||
|
||||
}
|
||||
return parsededColumns;
|
||||
}
|
||||
|
||||
private static int indexOfIgnoreCase(List<String> columnCollection,
|
||||
                                             String column) {
|
||||
int index = -1;
|
||||
for (int i = 0; i < columnCollection.size(); i++) {
|
||||
if (columnCollection.get(i).equalsIgnoreCase(column)) {
|
||||
@ -255,7 +295,7 @@ public final class OdpsUtil {
|
||||
return tunnel.createDownloadSession(
|
||||
projectName, tableName);
|
||||
}
|
||||
            }, DataXCaseEnvUtil.getRetryTimes(MAX_RETRY_TIME), DataXCaseEnvUtil.getRetryInterval(1000), DataXCaseEnvUtil.getRetryExponential(true));
|
||||
} catch (Exception e) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.CREATE_DOWNLOADSESSION_FAIL, e);
|
||||
}
|
||||
@ -276,7 +316,7 @@ public final class OdpsUtil {
|
||||
return tunnel.getDownloadSession(
|
||||
projectName, tableName, sessionId);
|
||||
}
|
||||
            }, DataXCaseEnvUtil.getRetryTimes(MAX_RETRY_TIME), DataXCaseEnvUtil.getRetryInterval(1000), DataXCaseEnvUtil.getRetryExponential(true));
|
||||
} catch (Exception e) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.GET_DOWNLOADSESSION_FAIL, e);
|
||||
}
|
||||
@ -299,7 +339,7 @@ public final class OdpsUtil {
|
||||
return tunnel.createDownloadSession(
|
||||
projectName, tableName, partitionSpec);
|
||||
}
|
||||
            }, DataXCaseEnvUtil.getRetryTimes(MAX_RETRY_TIME), DataXCaseEnvUtil.getRetryInterval(1000), DataXCaseEnvUtil.getRetryExponential(true));
|
||||
} catch (Exception e) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.CREATE_DOWNLOADSESSION_FAIL, e);
|
||||
}
|
||||
@ -321,58 +361,152 @@ public final class OdpsUtil {
|
||||
return tunnel.getDownloadSession(
|
||||
projectName, tableName, partitionSpec, sessionId);
|
||||
}
|
||||
            }, DataXCaseEnvUtil.getRetryTimes(MAX_RETRY_TIME), DataXCaseEnvUtil.getRetryInterval(1000), DataXCaseEnvUtil.getRetryExponential(true));
|
||||
} catch (Exception e) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.GET_DOWNLOADSESSION_FAIL, e);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
|
||||
/**
|
||||
* odpsreader采用的直接读取所有列的downloadSession
|
||||
*/
|
||||
public static RecordReader getRecordReader(final TableTunnel.DownloadSession downloadSession, final long start, final long count,
|
||||
                                               final boolean isCompress) {
|
||||
try {
|
||||
return RetryUtil.executeWithRetry(new Callable<RecordReader>() {
|
||||
@Override
|
||||
public RecordReader call() throws Exception {
|
||||
return downloadSession.openRecordReader(start, count, isCompress);
|
||||
}
|
||||
            }, DataXCaseEnvUtil.getRetryTimes(MAX_RETRY_TIME), DataXCaseEnvUtil.getRetryInterval(1000), DataXCaseEnvUtil.getRetryExponential(true));
|
||||
} catch (Exception e) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.OPEN_RECORD_READER_FAILED,
|
||||
"open RecordReader失败. 请联系 ODPS 管理员处理.", e);
|
||||
MESSAGE_SOURCE.message("odpsutil.6"), e);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* odpsreader采用的指定读取某些列的downloadSession
|
||||
*/
|
||||
public static RecordReader getRecordReader(final TableTunnel.DownloadSession downloadSession, final long start, final long count,
|
||||
final boolean isCompress, final List<Column> columns) {
|
||||
try {
|
||||
return RetryUtil.executeWithRetry(new Callable<RecordReader>() {
|
||||
@Override
|
||||
public RecordReader call() throws Exception {
|
||||
return downloadSession.openRecordReader(start, count, isCompress, columns);
|
||||
}
|
||||
}, DataXCaseEnvUtil.getRetryTimes(MAX_RETRY_TIME), DataXCaseEnvUtil.getRetryInterval(1000), DataXCaseEnvUtil.getRetryExponential(true));
|
||||
} catch (Exception e) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.OPEN_RECORD_READER_FAILED,
|
||||
MESSAGE_SOURCE.message("odpsutil.6"), e);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* table.reload() 方法抛出的 odps 异常 转化为更清晰的 datax 异常 抛出
|
||||
*/
|
||||
public static void throwDataXExceptionWhenReloadTable(Exception e, String tableName) {
|
||||
        if (e.getMessage() != null) {
            if (e.getMessage().contains(OdpsExceptionMsg.ODPS_PROJECT_NOT_FOUNT)) {
                throw DataXException.asDataXException(OdpsReaderErrorCode.ODPS_PROJECT_NOT_FOUNT,
                        MESSAGE_SOURCE.message("odpsutil.7", tableName), e);
            } else if (e.getMessage().contains(OdpsExceptionMsg.ODPS_TABLE_NOT_FOUNT)) {
                throw DataXException.asDataXException(OdpsReaderErrorCode.ODPS_TABLE_NOT_FOUNT,
                        MESSAGE_SOURCE.message("odpsutil.8", tableName), e);
            } else if (e.getMessage().contains(OdpsExceptionMsg.ODPS_ACCESS_KEY_ID_NOT_FOUND)) {
                throw DataXException.asDataXException(OdpsReaderErrorCode.ODPS_ACCESS_KEY_ID_NOT_FOUND,
                        MESSAGE_SOURCE.message("odpsutil.9", tableName), e);
            } else if (e.getMessage().contains(OdpsExceptionMsg.ODPS_ACCESS_KEY_INVALID)) {
                throw DataXException.asDataXException(OdpsReaderErrorCode.ODPS_ACCESS_KEY_INVALID,
                        MESSAGE_SOURCE.message("odpsutil.10", tableName), e);
            } else if (e.getMessage().contains(OdpsExceptionMsg.ODPS_ACCESS_DENY)) {
                throw DataXException.asDataXException(OdpsReaderErrorCode.ODPS_ACCESS_DENY,
                        MESSAGE_SOURCE.message("odpsutil.11", tableName), e);
            }
        }
        throw DataXException.asDataXException(OdpsReaderErrorCode.ILLEGAL_VALUE,
                MESSAGE_SOURCE.message("odpsutil.12", tableName), e);
|
||||
}
|
||||
|
||||
public static List<Column> getNormalColumns(List<Pair<String, ColumnType>> parsedColumns,
|
||||
Map<String, TypeInfo> columnTypeMap) {
|
||||
List<Column> userConfigNormalColumns = new ArrayList<Column>();
|
||||
Set<String> columnNameSet = new HashSet<String>();
|
||||
for (Pair<String, ColumnType> columnInfo : parsedColumns) {
|
||||
if (columnInfo.getValue() == ColumnType.NORMAL) {
|
||||
String columnName = columnInfo.getKey();
|
||||
if (!columnNameSet.contains(columnName)) {
|
||||
Column column = new Column(columnName, columnTypeMap.get(columnName));
|
||||
userConfigNormalColumns.add(column);
|
||||
columnNameSet.add(columnName);
|
||||
}
|
||||
}
|
||||
}
|
||||
return userConfigNormalColumns;
|
||||
}
|
||||
|
||||
/**
|
||||
* 执行odps preSql和postSql
|
||||
*
|
||||
* @param odps: odps client
|
||||
* @param sql : 要执行的odps sql语句, 因为会有重试, 所以sql 必须为幂等的
|
||||
* @param tag : "preSql" or "postSql"
|
||||
*/
|
||||
public static void runSqlTaskWithRetry(final Odps odps, final String sql, final String tag){
|
||||
//重试次数
|
||||
int retryTimes = 10;
|
||||
//重试间隔(ms)
|
||||
long sleepTimeInMilliSecond = 1000L;
|
||||
try {
|
||||
RetryUtil.executeWithRetry(new Callable<Void>() {
|
||||
@Override
|
||||
public Void call() throws Exception {
|
||||
long beginTime = System.currentTimeMillis();
|
||||
|
||||
runSqlTask(odps, sql, tag);
|
||||
|
||||
                    long endTime = System.currentTimeMillis();
                    LOG.info(String.format("execute odps sql: %s finished, cost time : %s ms",
                            sql, (endTime - beginTime)));
|
||||
return null;
|
||||
}
|
||||
}, DataXCaseEnvUtil.getRetryTimes(retryTimes), DataXCaseEnvUtil.getRetryInterval(sleepTimeInMilliSecond), DataXCaseEnvUtil.getRetryExponential(true));
|
||||
} catch (Exception e) {
|
||||
String errMessage = String.format("Retry %s times to exectue sql :[%s] failed! Exception: %s",
|
||||
retryTimes, e.getMessage());
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.RUN_SQL_ODPS_EXCEPTION, errMessage, e);
|
||||
}
|
||||
}
|
||||
|
||||
public static void runSqlTask(Odps odps, String sql, String tag) {
|
||||
if (StringUtils.isBlank(sql)) {
|
||||
return;
|
||||
}
|
||||
|
||||
String taskName = String.format("datax_odpsreader_%s_%s", tag, UUID.randomUUID().toString().replace('-', '_'));
|
||||
|
||||
LOG.info("Try to start sqlTask:[{}] to run odps sql:[\n{}\n] .", taskName, sql);
|
||||
|
||||
Instance instance;
|
||||
Instance.TaskStatus status;
|
||||
try {
|
||||
Map<String, String> hints = new HashMap<String, String>();
|
||||
hints.put("odps.sql.submit.mode", "script");
|
||||
instance = SQLTask.run(odps, odps.getDefaultProject(), sql, taskName, hints, null);
|
||||
instance.waitForSuccess();
|
||||
status = instance.getTaskStatus().get(taskName);
|
||||
if (!Instance.TaskStatus.Status.SUCCESS.equals(status.getStatus())) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.RUN_SQL_FAILED,
|
||||
MESSAGE_SOURCE.message("odpsutil.13", sql));
|
||||
}
|
||||
} catch (DataXException e) {
|
||||
throw e;
|
||||
} catch (Exception e) {
|
||||
throw DataXException.asDataXException(OdpsReaderErrorCode.RUN_SQL_ODPS_EXCEPTION,
|
||||
MESSAGE_SOURCE.message("odpsutil.14", sql), e);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
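Note on the two retry helpers above: because runSqlTaskWithRetry may re-run the statement, any preSql/postSql handed to it has to be idempotent, as its javadoc states. A hedged usage sketch (the Odps client "odps" is assumed to be already built by this plugin's init code; the SQL text is purely illustrative):

    // Illustration only: an "insert overwrite" is naturally idempotent,
    // which is what the retry contract of runSqlTaskWithRetry expects.
    OdpsUtil.runSqlTaskWithRetry(odps, "insert overwrite table tmp_stats select count(*) from src_table;", "postSql");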
@ -0,0 +1,103 @@
|
||||
package com.alibaba.datax.plugin.reader.odpsreader.util;
|
||||
|
||||
import java.sql.Connection;
|
||||
import java.sql.DriverManager;
|
||||
import java.sql.ResultSet;
|
||||
import java.sql.ResultSetMetaData;
|
||||
import java.sql.SQLException;
|
||||
import java.sql.Statement;
|
||||
import java.util.ArrayList;
|
||||
import java.util.List;
|
||||
|
||||
import com.alibaba.datax.plugin.reader.odpsreader.Constant;
|
||||
import com.aliyun.odps.Partition;
|
||||
import com.aliyun.odps.Table;
|
||||
import org.apache.commons.lang3.StringUtils;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
public class SqliteUtil {
|
||||
|
||||
private static final Logger LOGGER = LoggerFactory.getLogger(SqliteUtil.class);
|
||||
|
||||
private Connection connection = null;
|
||||
private Statement stmt = null;
|
||||
|
||||
private String partitionName = "partitionName";
|
||||
|
||||
private String createSQLTemplate = "Create Table DataXODPSReaderPPR (" + partitionName +" String, %s)";
|
||||
private String insertSQLTemplate = "Insert Into DataXODPSReaderPPR Values (%s)";
|
||||
private String selectSQLTemplate = "Select * From DataXODPSReaderPPR Where %s";
|
||||
|
||||
public SqliteUtil() throws ClassNotFoundException, SQLException {
|
||||
|
||||
Class.forName("org.sqlite.JDBC");
|
||||
this.connection = DriverManager.getConnection("jdbc:sqlite::memory:");
|
||||
this.stmt = this.connection.createStatement();
|
||||
}
|
||||
|
||||
public void loadAllPartitionsIntoSqlite(Table table, List<String> allOriginPartitions) throws SQLException {
|
||||
List<String> partitionColumnList = new ArrayList<String>();
|
||||
String partition = allOriginPartitions.get(0);
|
||||
String[] partitionSpecs = partition.split(",");
|
||||
List<String> partitionKeyList = new ArrayList<String>();
|
||||
for (String partitionKeyValue : partitionSpecs) {
|
||||
String partitionKey = partitionKeyValue.split("=")[0];
|
||||
partitionColumnList.add(String.format("%s String", partitionKey));
|
||||
partitionKeyList.add(partitionKey);
|
||||
}
|
||||
String createSQL = String.format(createSQLTemplate, StringUtils.join(partitionColumnList.toArray(), ","));
|
||||
LOGGER.info(createSQL);
|
||||
this.stmt.execute(createSQL);
|
||||
|
||||
insertAllOriginPartitionIntoSqlite(table, partitionKeyList);
|
||||
}
|
||||
|
||||
/**
|
||||
* 根据用户配置的过滤条件, 从sqlite中select出符合的partition列表
|
||||
* @param userHintConfiguredPartitions
|
||||
* @return
|
||||
*/
|
||||
public List<String> selectUserConfiguredPartition(List<String> userHintConfiguredPartitions) throws SQLException {
|
||||
List<String> selectedPartitionsFromSqlite = new ArrayList<String>();
|
||||
for (String partitionWhereConditions : userHintConfiguredPartitions) {
|
||||
String selectUserConfiguredPartitionsSql = String.format(selectSQLTemplate,
|
||||
StringUtils.remove(partitionWhereConditions, Constant.PARTITION_FILTER_HINT));
|
||||
LOGGER.info(selectUserConfiguredPartitionsSql);
|
||||
ResultSet rs = stmt.executeQuery(selectUserConfiguredPartitionsSql);
|
||||
while (rs.next()) {
|
||||
selectedPartitionsFromSqlite.add(getPartitionsValue(rs));
|
||||
}
|
||||
}
|
||||
return selectedPartitionsFromSqlite;
|
||||
}
|
||||
|
||||
private String getPartitionsValue (ResultSet rs) throws SQLException {
|
||||
List<String> partitions = new ArrayList<String>();
|
||||
ResultSetMetaData rsMetaData = rs.getMetaData();
|
||||
Integer columnCounter = rs.getMetaData().getColumnCount();
|
||||
for (int columnIndex = 2; columnIndex <= columnCounter; columnIndex++) {
|
||||
partitions.add(String.format("%s=%s", rsMetaData.getColumnName(columnIndex), rs.getString(columnIndex)));
|
||||
}
|
||||
return StringUtils.join(partitions, ",");
|
||||
}
|
||||
|
||||
/**
|
||||
* 将odps table里所有partition值载入sqlite中
|
||||
* @param table
|
||||
* @param partitionKeyList
|
||||
* @throws SQLException
|
||||
*/
|
||||
private void insertAllOriginPartitionIntoSqlite(Table table, List<String> partitionKeyList) throws SQLException {
|
||||
List<Partition> partitions = table.getPartitions();
|
||||
for (Partition partition : partitions){
|
||||
List<String> partitionColumnValue = new ArrayList<String>();
|
||||
partitionColumnValue.add("\""+partition.getPartitionSpec().toString()+"\"");
|
||||
for (String partitionKey : partitionKeyList) {
|
||||
partitionColumnValue.add("\""+partition.getPartitionSpec().get(partitionKey)+"\"");
|
||||
}
|
||||
String insertPartitionValueSql = String.format(insertSQLTemplate, StringUtils.join(partitionColumnValue, ","));
|
||||
this.stmt.execute(insertPartitionValueSql);
|
||||
}
|
||||
}
|
||||
}
|
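SqliteUtil above loads every partition of the source table into an in-memory SQLite table and then evaluates the user's /*query*/ hint as a plain WHERE clause. A hedged usage sketch (the "table" and "allOriginPartitions" inputs are assumed to come from the surrounding reader code and are not defined here):

    // Illustration only.
    SqliteUtil sqliteUtil = new SqliteUtil();
    sqliteUtil.loadAllPartitionsIntoSqlite(table, allOriginPartitions);
    // The /*query*/ prefix is stripped by selectUserConfiguredPartition before the WHERE clause runs.
    List<String> matched = sqliteUtil.selectUserConfiguredPartition(
            Arrays.asList("/*query*/ dt>=20170101 and dt<=20170109"));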
@ -0,0 +1,39 @@
|
||||
package com.alibaba.datax.plugin.reader.odpsreader.util;
|
||||
|
||||
import java.util.List;
|
||||
|
||||
public class UserConfiguredPartitionClassification {
|
||||
|
||||
//包含/*query*/的partition, 例如: /*query*/ dt>=20170101 and dt<= 20170109
|
||||
private List<String> userConfiguredHintPartition;
|
||||
|
||||
//不包含/*query*/的partition, 例如: dt=20170101 或者 dt=201701*
|
||||
private List<String> userConfiguredNormalPartition;
|
||||
|
||||
//是否包含hint的partition
|
||||
private boolean isIncludeHintPartition;
|
||||
|
||||
public List<String> getUserConfiguredHintPartition() {
|
||||
return userConfiguredHintPartition;
|
||||
}
|
||||
|
||||
public void setUserConfiguredHintPartition(List<String> userConfiguredHintPartition) {
|
||||
this.userConfiguredHintPartition = userConfiguredHintPartition;
|
||||
}
|
||||
|
||||
public List<String> getUserConfiguredNormalPartition() {
|
||||
return userConfiguredNormalPartition;
|
||||
}
|
||||
|
||||
public void setUserConfiguredNormalPartition(List<String> userConfiguredNormalPartition) {
|
||||
this.userConfiguredNormalPartition = userConfiguredNormalPartition;
|
||||
}
|
||||
|
||||
public boolean isIncludeHintPartition() {
|
||||
return isIncludeHintPartition;
|
||||
}
|
||||
|
||||
public void setIncludeHintPartition(boolean includeHintPartition) {
|
||||
isIncludeHintPartition = includeHintPartition;
|
||||
}
|
||||
}
|
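This holder is filled by OdpsUtil.classifyUserConfiguredPartitions, added earlier in this commit. A hedged sketch of the expected behaviour (the values in the trailing comments are what the classification logic implies, not output captured from a run):

    // Illustration only.
    List<String> configured = Arrays.asList(
            "dt=20170101",                                  // normal partition value
            "/*query*/ dt>=20170102 and dt<=20170109");     // hint-style range filter
    UserConfiguredPartitionClassification c = OdpsUtil.classifyUserConfiguredPartitions(configured);
    // c.isIncludeHintPartition()           -> true
    // c.getUserConfiguredHintPartition()   -> the /*query*/ entry
    // c.getUserConfiguredNormalPartition() -> ["dt=20170101"]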
Binary file not shown.
@ -31,17 +31,10 @@
|
||||
<artifactId>logback-classic</artifactId>
|
||||
</dependency>
|
||||
        <dependency>
            <groupId>com.aliyun.odps</groupId>
            <artifactId>odps-sdk-core</artifactId>
            <version>0.38.4-public</version>
        </dependency>
|
||||
|
||||
<!-- httpclient begin -->
|
||||
<dependency>
|
||||
@ -51,6 +44,14 @@
|
||||
</dependency>
|
||||
<!-- httpclient end -->
|
||||
|
||||
<!-- json begin -->
|
||||
<!-- <dependency>
|
||||
<groupId>net.sf.json-lib</groupId>
|
||||
<artifactId>json-lib</artifactId>
|
||||
<version>2.2.3</version>
|
||||
</dependency> -->
|
||||
<!-- json end -->
|
||||
|
||||
<dependency>
|
||||
<groupId>org.mockito</groupId>
|
||||
<artifactId>mockito-core</artifactId>
|
||||
@ -70,9 +71,30 @@
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
|
||||
<!-- https://mvnrepository.com/artifact/org.aspectj/aspectjweaver -->
|
||||
<dependency>
|
||||
<groupId>org.aspectj</groupId>
|
||||
<artifactId>aspectjweaver</artifactId>
|
||||
<version>1.8.10</version>
|
||||
</dependency>
|
||||
|
||||
<dependency>
|
||||
<groupId>commons-codec</groupId>
|
||||
<artifactId>commons-codec</artifactId>
|
||||
<version>1.8</version>
|
||||
</dependency>
|
||||
|
||||
</dependencies>
|
||||
|
||||
<build>
|
||||
<resources>
|
||||
<resource>
|
||||
<directory>src/main/java</directory>
|
||||
<includes>
|
||||
<include>**/*.properties</include>
|
||||
</includes>
|
||||
</resource>
|
||||
</resources>
|
||||
<plugins>
|
||||
<!-- compiler plugin -->
|
||||
<plugin>
|
||||
|
@ -23,13 +23,6 @@
|
||||
</includes>
|
||||
<outputDirectory>plugin/writer/odpswriter</outputDirectory>
|
||||
</fileSet>
|
||||
<fileSet>
|
||||
<directory>src/main/libs</directory>
|
||||
<includes>
|
||||
<include>*.*</include>
|
||||
</includes>
|
||||
<outputDirectory>plugin/writer/odpswriter/libs</outputDirectory>
|
||||
</fileSet>
|
||||
</fileSets>
|
||||
|
||||
<dependencySets>
|
||||
|
@ -12,4 +12,34 @@ public class Constant {
|
||||
|
||||
public static final String COLUMN_POSITION = "columnPosition";
|
||||
|
||||
/*
|
||||
* 每个task独立维护一个proxy列表,一共会生成 task并发量 * 分区数量 的proxy,每个proxy会创建 blocksizeInMB(一般是64M) 大小的数组
|
||||
* 因此极易OOM,
|
||||
* 假设默认情况下768M的内存,实际最多只能创建 12 个proxy,8G内存最多只能创建126个proxy,所以最多只允许创建一定数量的proxy,对应到分区数量 1:1
|
||||
*
|
||||
* blockSizeInMB 减小可以减少内存消耗,但是意味着更高频率的网络请求,会对odps服务器造成较大压力
|
||||
*
|
||||
* 另外,可以考虑proxy不用常驻内存,但是需要增加复杂的控制逻辑
|
||||
* 但是一般情况下用户作为分区值得数据是有规律的,比如按照时间,2020-08的数据已经同步完成了,并且后面没有这个分区的数据了,对应的proxy还放在内存中,
|
||||
* 会造成很大的内存浪费。所以有必要对某些proxy进行回收。
|
||||
*
|
||||
* 这里采用是否回收某个proxy的标准是:在最近时间内是否有过数据传输。
|
||||
*
|
||||
*
|
||||
* 需要注意的问题!
|
||||
* 多个任务公用一个proxy,写入时需要抢锁,多并发的性能会受到很大影响,相当于单个分区时串行写入
|
||||
* 这个对性能影响很大,需要避免这种方式,还是尽量各个task有独立的proxy,只是需要去控制内存的使用,只能是控制每个task保有的proxy数量了
|
||||
*
|
||||
* 还可以考虑修改proxy的数组大小,但是设置太小不确定会不会影响性能。可以测试一下
|
||||
*/
|
||||
|
||||
public static final Long PROXY_MAX_IDLE_TIME_MS =60 * 1000L; // 60s没有动作就回收
|
||||
|
||||
public static final Long MAX_PARTITION_CNT = 200L;
|
||||
|
||||
public static final int UTF8_ENCODED_CHAR_MAX_SIZE = 6;
|
||||
|
||||
public static final int DEFAULT_FIELD_MAX_SIZE = 8 * 1024 * 1024;
|
||||
|
||||
|
||||
}
|
||||
|
@ -0,0 +1,57 @@
|
||||
package com.alibaba.datax.plugin.writer.odpswriter;
|
||||
|
||||
public class DateTransForm {
|
||||
/**
* Column name
*/
|
||||
private String colName;
|
||||
|
||||
/**
* The format the value currently has
*/
|
||||
private String fromFormat;
|
||||
|
||||
/**
* The format to convert the value to
*/
|
||||
private String toFormat;
|
||||
|
||||
public DateTransForm(String colName, String fromFormat, String toFormat) {
|
||||
this.colName = colName;
|
||||
this.fromFormat = fromFormat;
|
||||
this.toFormat = toFormat;
|
||||
}
|
||||
|
||||
public String getColName() {
|
||||
return colName;
|
||||
}
|
||||
|
||||
public void setColName(String colName) {
|
||||
this.colName = colName;
|
||||
}
|
||||
|
||||
public String getFromFormat() {
|
||||
return fromFormat;
|
||||
}
|
||||
|
||||
public void setFromFormat(String fromFormat) {
|
||||
this.fromFormat = fromFormat;
|
||||
}
|
||||
|
||||
public String getToFormat() {
|
||||
return toFormat;
|
||||
}
|
||||
|
||||
public void setToFormat(String toFormat) {
|
||||
this.toFormat = toFormat;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String toString() {
|
||||
return "DateTransForm{" +
|
||||
"colName='" + colName + '\'' +
|
||||
", fromFormat='" + fromFormat + '\'' +
|
||||
", toFormat='" + toFormat + '\'' +
|
||||
'}';
|
||||
}
|
||||
}
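A DateTransForm only carries the column name and the two date patterns; the map of these objects is handed to OdpsUtil.getPartColValFromDataXRecord further down in this diff, which performs the actual conversion. As a rough illustration of the intended use, assuming standard SimpleDateFormat semantics and a hypothetical helper name:

```java
import com.alibaba.datax.plugin.writer.odpswriter.DateTransForm;

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateTransFormDemo {

    /**
     * Re-formats a partition column value according to a DateTransForm:
     * parse the source string with fromFormat, then render it with toFormat.
     */
    public static String apply(DateTransForm form, String srcValue) throws ParseException {
        Date parsed = new SimpleDateFormat(form.getFromFormat()).parse(srcValue);
        return new SimpleDateFormat(form.getToFormat()).format(parsed);
    }

    public static void main(String[] args) throws ParseException {
        // Keep only the day part of a second-precision timestamp for the "pt" partition column.
        DateTransForm form = new DateTransForm("pt", "yyyy-MM-dd HH:mm:ss", "yyyyMMdd");
        System.out.println(apply(form, "2020-08-01 12:34:56")); // 20200801
    }
}
```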
|
@ -11,6 +11,8 @@ public final class Key {
|
||||
|
||||
public final static String ACCESS_KEY = "accessKey";
|
||||
|
||||
public final static String SECURITY_TOKEN = "securityToken";
|
||||
|
||||
public final static String PROJECT = "project";
|
||||
|
||||
public final static String TABLE = "table";
|
||||
@ -31,4 +33,58 @@ public final class Key {
|
||||
public final static String ACCOUNT_TYPE = "accountType";
|
||||
|
||||
public final static String IS_COMPRESS = "isCompress";
|
||||
|
||||
// preSql
|
||||
public final static String PRE_SQL="preSql";
|
||||
|
||||
// postSql
|
||||
public final static String POST_SQL="postSql";
|
||||
|
||||
public final static String CONSISTENCY_COMMIT = "consistencyCommit";
|
||||
|
||||
public final static String UPLOAD_ID = "uploadId";
|
||||
|
||||
public final static String TASK_COUNT = "taskCount";
|
||||
|
||||
/**
* Dynamic partition support: the target partition of each record is determined from one or more of its column values.
* 1. Which columns: the partition columns of the destination table, routed via the corresponding entries in "column".
* 2. When to create the upload session: with dynamic partitioning the partition is unknown at init time, so the upload session can only be created once a concrete record has been read.
* 3. Caching upload sessions: a new session is created for every new partition and cached, so later records for the same partition can reuse it.
* 4. Parameter check: the "partition" setting does not need to be checked.
*/
|
||||
public final static String SUPPORT_DYNAMIC_PARTITION = "supportDynamicPartition";
|
||||
|
||||
/**
* With dynamic partitioning, a user may map a source time column to a partition column. Typical need: the source
* column is precise to the second, but when writing to the ODPS table only the day should be kept and used as the day partition.
* Format:
* "partitionColumnMapping":[
*   {
*     "name":"pt",                           // required
*     "srcDateFormat":"YYYY-MM-dd hh:mm:ss", // optional; if the source column is a String, this must give its date format
*     "dateFormat":"YYYY-MM-dd"              // required
*   },
*   {
*     ...
*   },
*   ...
* ]
*/
|
||||
public final static String PARTITION_COL_MAPPING = "partitionColumnMapping";
|
||||
public final static String PARTITION_COL_MAPPING_NAME = "name";
|
||||
public final static String PARTITION_COL_MAPPING_SRC_COL_DATEFORMAT = "srcDateFormat";
|
||||
public final static String PARTITION_COL_MAPPING_DATEFORMAT = "dateFormat";
|
||||
public final static String WRITE_TIMEOUT_IN_MS = "writeTimeoutInMs";
|
||||
|
||||
public static final String OVER_LENGTH_RULE = "overLengthRule";
|
||||
// maximum length kept after truncation
|
||||
public static final String MAX_FIELD_LENGTH = "maxFieldLength";
|
||||
// maximum length supported by ODPS itself
|
||||
public static final String MAX_ODPS_FIELD_LENGTH = "maxOdpsFieldLength";
|
||||
public static final String ENABLE_OVER_LENGTH_OUTPUT = "enableOverLengthOutput";
|
||||
public static final String MAX_OVER_LENGTH_OUTPUT_COUNT = "maxOverLengthOutputCount";
|
||||
|
||||
// in dynamic partition write mode, the flush interval (in minutes) applied once memory usage reaches 80%
|
||||
public static final String DYNAMIC_PARTITION_MEM_USAGE_FLUSH_INTERVAL_IN_MINUTE = "dynamicPartitionMemUsageFlushIntervalInMinute";
|
||||
}
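Put together, a writer `parameter` block exercising these keys might look like the following. This is an illustrative configuration assembled with fastjson (already used by the plugin); the project, table, column names and date patterns are made up, and the patterns are written in SimpleDateFormat style:

```java
import com.alibaba.fastjson.JSONArray;
import com.alibaba.fastjson.JSONObject;

public class OdpsWriterParameterSketch {
    public static void main(String[] args) {
        // Mapping rule for the "pt" partition column: the source value is a second-precision
        // date string; only the day part is kept for the partition.
        JSONObject ptMapping = new JSONObject();
        ptMapping.put("name", "pt");
        ptMapping.put("srcDateFormat", "yyyy-MM-dd HH:mm:ss");
        ptMapping.put("dateFormat", "yyyyMMdd");

        JSONArray mappings = new JSONArray();
        mappings.add(ptMapping);

        JSONObject parameter = new JSONObject();
        parameter.put("project", "example_project");            // hypothetical project name
        parameter.put("table", "example_table");                // hypothetical table name
        parameter.put("column", JSONArray.parseArray("[\"id\",\"name\",\"gmt_create\",\"pt\"]"));
        parameter.put("truncate", true);
        parameter.put("supportDynamicPartition", true);         // Key.SUPPORT_DYNAMIC_PARTITION
        parameter.put("partitionColumnMapping", mappings);      // Key.PARTITION_COL_MAPPING
        parameter.put("overLengthRule", "keepOn");              // default value used by the writer
        parameter.put("maxFieldLength", 8 * 1024 * 1024);       // Constant.DEFAULT_FIELD_MAX_SIZE

        System.out.println(parameter.toJSONString());
    }
}
```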
|
||||
|
@ -0,0 +1,34 @@
|
||||
errorcode.required_value=\u60a8\u7f3a\u5931\u4e86\u5fc5\u987b\u586b\u5199\u7684\u53c2\u6570\u503c.
|
||||
errorcode.illegal_value=\u60a8\u914d\u7f6e\u7684\u503c\u4e0d\u5408\u6cd5.
|
||||
errorcode.unsupported_column_type=DataX \u4e0d\u652f\u6301\u5199\u5165 ODPS \u7684\u76ee\u7684\u8868\u7684\u6b64\u79cd\u6570\u636e\u7c7b\u578b.
|
||||
errorcode.table_truncate_error=\u6e05\u7a7a ODPS \u76ee\u7684\u8868\u65f6\u51fa\u9519.
|
||||
errorcode.create_master_upload_fail=\u521b\u5efa ODPS \u7684 uploadSession \u5931\u8d25.
|
||||
errorcode.get_slave_upload_fail=\u83b7\u53d6 ODPS \u7684 uploadSession \u5931\u8d25.
|
||||
errorcode.get_id_key_fail=\u83b7\u53d6 accessId/accessKey \u5931\u8d25.
|
||||
errorcode.get_partition_fail=\u83b7\u53d6 ODPS \u76ee\u7684\u8868\u7684\u6240\u6709\u5206\u533a\u5931\u8d25.
|
||||
errorcode.add_partition_failed=\u6dfb\u52a0\u5206\u533a\u5230 ODPS \u76ee\u7684\u8868\u5931\u8d25.
|
||||
errorcode.writer_record_fail=\u5199\u5165\u6570\u636e\u5230 ODPS \u76ee\u7684\u8868\u5931\u8d25.
|
||||
errorcode.commit_block_fail=\u63d0\u4ea4 block \u5230 ODPS \u76ee\u7684\u8868\u5931\u8d25.
|
||||
errorcode.run_sql_failed=\u6267\u884c ODPS Sql \u5931\u8d25.
|
||||
errorcode.check_if_partitioned_table_failed=\u68c0\u67e5 ODPS \u76ee\u7684\u8868:%s \u662f\u5426\u4e3a\u5206\u533a\u8868\u5931\u8d25.
|
||||
errorcode.run_sql_odps_exception=\u6267\u884c ODPS Sql \u65f6\u629b\u51fa\u5f02\u5e38, \u53ef\u91cd\u8bd5
|
||||
errorcode.account_type_error=\u8d26\u53f7\u7c7b\u578b\u9519\u8bef.
|
||||
errorcode.partition_error=\u5206\u533a\u914d\u7f6e\u9519\u8bef.
|
||||
errorcode.column_not_exist=\u7528\u6237\u914d\u7f6e\u7684\u5217\u4e0d\u5b58\u5728.
|
||||
errorcode.odps_project_not_fount=\u60a8\u914d\u7f6e\u7684\u503c\u4e0d\u5408\u6cd5, odps project \u4e0d\u5b58\u5728.
|
||||
errorcode.odps_table_not_fount=\u60a8\u914d\u7f6e\u7684\u503c\u4e0d\u5408\u6cd5, odps table \u4e0d\u5b58\u5728
|
||||
errorcode.odps_access_key_id_not_found=\u60a8\u914d\u7f6e\u7684\u503c\u4e0d\u5408\u6cd5, odps accessId,accessKey \u4e0d\u5b58\u5728
|
||||
errorcode.odps_access_key_invalid=\u60a8\u914d\u7f6e\u7684\u503c\u4e0d\u5408\u6cd5, odps accessKey \u9519\u8bef
|
||||
errorcode.odps_access_deny=\u62d2\u7edd\u8bbf\u95ee, \u60a8\u4e0d\u5728 \u60a8\u914d\u7f6e\u7684 project \u4e2d
|
||||
|
||||
|
||||
odpswriter.1=\u8d26\u53f7\u7c7b\u578b\u9519\u8bef\uff0c\u56e0\u4e3a\u4f60\u7684\u8d26\u53f7 [{0}] \u4e0d\u662fdatax\u76ee\u524d\u652f\u6301\u7684\u8d26\u53f7\u7c7b\u578b\uff0c\u76ee\u524d\u4ec5\u652f\u6301aliyun, taobao\u8d26\u53f7\uff0c\u8bf7\u4fee\u6539\u60a8\u7684\u8d26\u53f7\u4fe1\u606f.
|
||||
odpswriter.2=\u8fd9\u662f\u4e00\u6761\u9700\u8981\u6ce8\u610f\u7684\u4fe1\u606f \u7531\u4e8e\u60a8\u7684\u4f5c\u4e1a\u914d\u7f6e\u4e86\u5199\u5165 ODPS \u7684\u76ee\u7684\u8868\u65f6emptyAsNull=true, \u6240\u4ee5 DataX\u5c06\u4f1a\u628a\u957f\u5ea6\u4e3a0\u7684\u7a7a\u5b57\u7b26\u4e32\u4f5c\u4e3a java \u7684 null \u5199\u5165 ODPS.
|
||||
odpswriter.3=\u60a8\u914d\u7f6e\u7684blockSizeInMB:{0} \u53c2\u6570\u9519\u8bef. \u6b63\u786e\u7684\u914d\u7f6e\u662f[1-512]\u4e4b\u95f4\u7684\u6574\u6570. \u8bf7\u4fee\u6539\u6b64\u53c2\u6570\u7684\u503c\u4e3a\u8be5\u533a\u95f4\u5185\u7684\u6570\u503c
|
||||
odpswriter.4=\u5199\u5165 ODPS \u76ee\u7684\u8868\u5931\u8d25. \u8bf7\u8054\u7cfb ODPS \u7ba1\u7406\u5458\u5904\u7406.
|
||||
|
||||
|
||||
odpswriterproxy.1=\u4eb2\uff0c\u914d\u7f6e\u4e2d\u7684\u6e90\u8868\u7684\u5217\u4e2a\u6570\u548c\u76ee\u7684\u7aef\u8868\u4e0d\u4e00\u81f4\uff0c\u6e90\u8868\u4e2d\u60a8\u914d\u7f6e\u7684\u5217\u6570\u662f:{0} \u5927\u4e8e\u76ee\u7684\u7aef\u7684\u5217\u6570\u662f:{1} , \u8fd9\u6837\u4f1a\u5bfc\u81f4\u6e90\u5934\u6570\u636e\u65e0\u6cd5\u6b63\u786e\u5bfc\u5165\u76ee\u7684\u7aef, \u8bf7\u68c0\u67e5\u60a8\u7684\u914d\u7f6e\u5e76\u4fee\u6539.
|
||||
odpswriterproxy.2=\u6e90\u8868\u7684\u5217\u4e2a\u6570\u5c0f\u4e8e\u76ee\u7684\u8868\u7684\u5217\u4e2a\u6570\uff0c\u6e90\u8868\u5217\u6570\u662f:{0} \u76ee\u7684\u8868\u5217\u6570\u662f:{1} , \u6570\u76ee\u4e0d\u5339\u914d. DataX \u4f1a\u628a\u76ee\u7684\u7aef\u591a\u51fa\u7684\u5217\u7684\u503c\u8bbe\u7f6e\u4e3a\u7a7a\u503c. \u5982\u679c\u8fd9\u4e2a\u9ed8\u8ba4\u914d\u7f6e\u4e0d\u7b26\u5408\u60a8\u7684\u671f\u671b\uff0c\u8bf7\u4fdd\u6301\u6e90\u8868\u548c\u76ee\u7684\u8868\u914d\u7f6e\u7684\u5217\u6570\u76ee\u4fdd\u6301\u4e00\u81f4.
|
||||
odpswriterproxy.3=Odps decimal \u7c7b\u578b\u7684\u6574\u6570\u4f4d\u4e2a\u6570\u4e0d\u80fd\u8d85\u8fc735
|
||||
odpswriterproxy.4=\u5199\u5165 ODPS \u76ee\u7684\u8868\u65f6\u9047\u5230\u4e86\u810f\u6570\u636e: \u7b2c[{0}]\u4e2a\u5b57\u6bb5 {1} \u7684\u6570\u636e\u51fa\u73b0\u9519\u8bef\uff0c\u8bf7\u68c0\u67e5\u8be5\u6570\u636e\u5e76\u4f5c\u51fa\u4fee\u6539 \u6216\u8005\u60a8\u53ef\u4ee5\u589e\u5927\u9600\u503c\uff0c\u5ffd\u7565\u8fd9\u6761\u8bb0\u5f55.
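These message keys are resolved through DataX's MessageSource, exactly as the rewritten OdpsWriterErrorCode enum later in this diff does. A minimal lookup sketch following the calls that appear in this commit (the variable names are mine):

```java
import com.alibaba.datax.common.util.MessageSource;
import com.alibaba.datax.plugin.writer.odpswriter.OdpsWriter;
import com.alibaba.datax.plugin.writer.odpswriter.OdpsWriterErrorCode;

public class MessageLookupSketch {
    public static void main(String[] args) {
        // Error-code descriptions, as used by the rewritten OdpsWriterErrorCode enum.
        MessageSource errorMessages = MessageSource.loadResourceBundle(OdpsWriterErrorCode.class);
        System.out.println(errorMessages.message("errorcode.required_value"));

        // Parameterized runtime messages, as used by OdpsWriter ("odpswriter.3" reports a bad blockSizeInMB).
        MessageSource writerMessages = MessageSource.loadResourceBundle(OdpsWriter.class);
        System.out.println(writerMessages.message("odpswriter.3", 1024));
    }
}
```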
|
@ -8,29 +8,49 @@ import com.alibaba.datax.common.spi.Writer;
|
||||
import com.alibaba.datax.common.statistics.PerfRecord;
|
||||
import com.alibaba.datax.common.util.Configuration;
|
||||
import com.alibaba.datax.common.util.ListUtil;
|
||||
import com.alibaba.datax.plugin.writer.odpswriter.util.IdAndKeyUtil;
|
||||
import com.alibaba.datax.plugin.writer.odpswriter.util.OdpsUtil;
|
||||
|
||||
import com.alibaba.datax.common.util.MessageSource;
|
||||
import com.alibaba.datax.plugin.writer.odpswriter.model.PartitionInfo;
|
||||
import com.alibaba.datax.plugin.writer.odpswriter.model.UserDefinedFunction;
|
||||
import com.alibaba.datax.plugin.writer.odpswriter.util.*;
|
||||
import com.alibaba.fastjson.JSON;
|
||||
import com.alibaba.fastjson.JSONArray;
|
||||
import com.alibaba.fastjson.JSONObject;
|
||||
import com.aliyun.odps.Odps;
|
||||
import com.aliyun.odps.Table;
|
||||
import com.aliyun.odps.TableSchema;
|
||||
import com.aliyun.odps.tunnel.TableTunnel;
|
||||
import org.apache.commons.lang3.StringUtils;
|
||||
import org.apache.commons.lang3.tuple.MutablePair;
|
||||
import org.apache.commons.lang3.tuple.Pair;
|
||||
import org.slf4j.Logger;
|
||||
import org.slf4j.LoggerFactory;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.List;
|
||||
import java.lang.management.ManagementFactory;
|
||||
import java.lang.management.MemoryUsage;
|
||||
import java.util.*;
|
||||
import java.util.concurrent.atomic.AtomicInteger;
|
||||
import java.util.concurrent.atomic.AtomicLong;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
import static com.alibaba.datax.plugin.writer.odpswriter.util.CustomPartitionUtils.getListWithJson;
|
||||
|
||||
/**
* Changed so that each task creates its own upload with its own uploadId and commits the corresponding blocks inside the task.
*/
|
||||
public class OdpsWriter extends Writer {
|
||||
|
||||
public static HashSet<String> partitionsDealedTruncate = new HashSet<>();
|
||||
static final Object lockForPartitionDealedTruncate = new Object();
|
||||
public static AtomicInteger partitionCnt = new AtomicInteger(0);
|
||||
public static Long maxPartitionCnt;
|
||||
public static AtomicLong globalTotalTruncatedRecordNumber = new AtomicLong(0);
|
||||
public static Long maxOutputOverLengthRecord;
|
||||
public static int maxOdpsFieldLength = Constant.DEFAULT_FIELD_MAX_SIZE;
|
||||
|
||||
public static class Job extends Writer.Job {
|
||||
private static final Logger LOG = LoggerFactory
|
||||
.getLogger(Job.class);
|
||||
private static final MessageSource MESSAGE_SOURCE = MessageSource.loadResourceBundle(OdpsWriter.class);
|
||||
|
||||
private static final boolean IS_DEBUG = LOG.isDebugEnabled();
|
||||
|
||||
@ -47,6 +67,8 @@ public class OdpsWriter extends Writer {
|
||||
private String uploadId;
|
||||
private TableTunnel.UploadSession masterUpload;
|
||||
private int blockSizeInMB;
|
||||
private boolean consistencyCommit;
|
||||
private boolean supportDynamicPartition;
|
||||
|
||||
public void preCheck() {
|
||||
this.init();
|
||||
@ -54,6 +76,94 @@ public class OdpsWriter extends Writer {
|
||||
}
|
||||
|
||||
public void doPreCheck() {
|
||||
|
||||
// check that the column configuration is valid
|
||||
List<String> allColumns = OdpsUtil.getAllColumns(this.table.getSchema());
|
||||
LOG.info("allColumnList: {} .", StringUtils.join(allColumns, ','));
|
||||
List<String> allPartColumns = OdpsUtil.getAllPartColumns(this.table.getSchema());
|
||||
LOG.info("allPartColumnsList: {} .", StringUtils.join(allPartColumns, ','));
|
||||
dealColumn(this.originalConfig, allColumns, allPartColumns);
|
||||
|
||||
// check that the partition configuration is valid
|
||||
if (!supportDynamicPartition) {
|
||||
OdpsUtil.preCheckPartition(this.odps, this.table, this.partition, this.truncate);
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void init() {
|
||||
this.originalConfig = super.getPluginJobConf();
|
||||
|
||||
|
||||
OdpsUtil.checkNecessaryConfig(this.originalConfig);
|
||||
OdpsUtil.dealMaxRetryTime(this.originalConfig);
|
||||
|
||||
|
||||
|
||||
this.projectName = this.originalConfig.getString(Key.PROJECT);
|
||||
this.tableName = this.originalConfig.getString(Key.TABLE);
|
||||
this.tunnelServer = this.originalConfig.getString(Key.TUNNEL_SERVER, null);
|
||||
|
||||
this.dealAK();
|
||||
|
||||
// init odps config
|
||||
this.odps = OdpsUtil.initOdpsProject(this.originalConfig);
|
||||
|
||||
// check that the table and related configuration are valid
|
||||
this.table = OdpsUtil.getTable(odps, this.projectName, this.tableName);
|
||||
|
||||
// handle the dynamic partition parameters and validate them; if dynamic partitioning is not configured, decide from the column mapping whether to enable it
|
||||
this.dealDynamicPartition();
|
||||
|
||||
//check isCompress
|
||||
this.originalConfig.getBool(Key.IS_COMPRESS, false);
|
||||
|
||||
// if this is not a dynamic partition write, check the partition configuration; dynamic partition writes skip this check
|
||||
if (!this.supportDynamicPartition) {
|
||||
this.partition = OdpsUtil.formatPartition(this.originalConfig
|
||||
.getString(Key.PARTITION, ""), true);
|
||||
this.originalConfig.set(Key.PARTITION, this.partition);
|
||||
}
|
||||
|
||||
this.truncate = this.originalConfig.getBool(Key.TRUNCATE);
|
||||
|
||||
this.consistencyCommit = this.originalConfig.getBool(Key.CONSISTENCY_COMMIT, false);
|
||||
|
||||
boolean emptyAsNull = this.originalConfig.getBool(Key.EMPTY_AS_NULL, false);
|
||||
this.originalConfig.set(Key.EMPTY_AS_NULL, emptyAsNull);
|
||||
if (emptyAsNull) {
|
||||
LOG.warn(MESSAGE_SOURCE.message("odpswriter.2"));
|
||||
}
|
||||
|
||||
this.blockSizeInMB = this.originalConfig.getInt(Key.BLOCK_SIZE_IN_MB, 64);
|
||||
if (this.blockSizeInMB < 8) {
|
||||
this.blockSizeInMB = 8;
|
||||
}
|
||||
this.originalConfig.set(Key.BLOCK_SIZE_IN_MB, this.blockSizeInMB);
|
||||
LOG.info("blockSizeInMB={}.", this.blockSizeInMB);
|
||||
maxPartitionCnt = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getMax() / 1024 / 1024 / this.blockSizeInMB;
|
||||
if (maxPartitionCnt < Constant.MAX_PARTITION_CNT) {
|
||||
maxPartitionCnt = Constant.MAX_PARTITION_CNT;
|
||||
}
|
||||
LOG.info("maxPartitionCnt={}", maxPartitionCnt);
|
||||
|
||||
if (IS_DEBUG) {
|
||||
LOG.debug("After master init(), job config now is: [\n{}\n] .",
|
||||
this.originalConfig.toJSON());
|
||||
}
|
||||
}
|
||||
|
||||
private void dealAK() {
|
||||
this.accountType = this.originalConfig.getString(Key.ACCOUNT_TYPE,
|
||||
Constant.DEFAULT_ACCOUNT_TYPE);
|
||||
|
||||
if (!Constant.DEFAULT_ACCOUNT_TYPE.equalsIgnoreCase(this.accountType) &&
|
||||
!Constant.TAOBAO_ACCOUNT_TYPE.equalsIgnoreCase(this.accountType)) {
|
||||
throw DataXException.asDataXException(OdpsWriterErrorCode.ACCOUNT_TYPE_ERROR,
|
||||
MESSAGE_SOURCE.message("odpswriter.1", accountType));
|
||||
}
|
||||
this.originalConfig.set(Key.ACCOUNT_TYPE, this.accountType);
|
||||
|
||||
// check the accessId/accessKey configuration
|
||||
if (Constant.DEFAULT_ACCOUNT_TYPE
|
||||
.equalsIgnoreCase(this.accountType)) {
|
||||
@ -66,66 +176,66 @@ public class OdpsWriter extends Writer {
|
||||
}
|
||||
LOG.info("accessId:[{}] .", accessId);
|
||||
}
|
||||
// init odps config
|
||||
this.odps = OdpsUtil.initOdpsProject(this.originalConfig);
|
||||
|
||||
//检查表等配置是否正确
|
||||
this.table = OdpsUtil.getTable(odps,this.projectName,this.tableName);
|
||||
|
||||
//检查列信息是否正确
|
||||
List<String> allColumns = OdpsUtil.getAllColumns(this.table.getSchema());
|
||||
LOG.info("allColumnList: {} .", StringUtils.join(allColumns, ','));
|
||||
dealColumn(this.originalConfig, allColumns);
|
||||
|
||||
//检查分区信息是否正确
|
||||
OdpsUtil.preCheckPartition(this.odps, this.table, this.partition, this.truncate);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void init() {
|
||||
this.originalConfig = super.getPluginJobConf();
|
||||
|
||||
OdpsUtil.checkNecessaryConfig(this.originalConfig);
|
||||
OdpsUtil.dealMaxRetryTime(this.originalConfig);
|
||||
|
||||
this.projectName = this.originalConfig.getString(Key.PROJECT);
|
||||
this.tableName = this.originalConfig.getString(Key.TABLE);
|
||||
this.tunnelServer = this.originalConfig.getString(Key.TUNNEL_SERVER, null);
|
||||
|
||||
//check isCompress
|
||||
this.originalConfig.getBool(Key.IS_COMPRESS, false);
|
||||
|
||||
this.partition = OdpsUtil.formatPartition(this.originalConfig
|
||||
.getString(Key.PARTITION, ""));
|
||||
this.originalConfig.set(Key.PARTITION, this.partition);
|
||||
|
||||
this.accountType = this.originalConfig.getString(Key.ACCOUNT_TYPE,
|
||||
Constant.DEFAULT_ACCOUNT_TYPE);
|
||||
if (!Constant.DEFAULT_ACCOUNT_TYPE.equalsIgnoreCase(this.accountType) &&
|
||||
!Constant.TAOBAO_ACCOUNT_TYPE.equalsIgnoreCase(this.accountType)) {
|
||||
throw DataXException.asDataXException(OdpsWriterErrorCode.ACCOUNT_TYPE_ERROR,
|
||||
String.format("账号类型错误,因为你的账号 [%s] 不是datax目前支持的账号类型,目前仅支持aliyun, taobao账号,请修改您的账号信息.", accountType));
|
||||
}
|
||||
this.originalConfig.set(Key.ACCOUNT_TYPE, this.accountType);
|
||||
|
||||
this.truncate = this.originalConfig.getBool(Key.TRUNCATE);
|
||||
|
||||
boolean emptyAsNull = this.originalConfig.getBool(Key.EMPTY_AS_NULL, false);
|
||||
this.originalConfig.set(Key.EMPTY_AS_NULL, emptyAsNull);
|
||||
if (emptyAsNull) {
|
||||
LOG.warn("这是一条需要注意的信息 由于您的作业配置了写入 ODPS 的目的表时emptyAsNull=true, 所以 DataX将会把长度为0的空字符串作为 java 的 null 写入 ODPS.");
|
||||
private void dealDynamicPartition() {
/*
* If supportDynamicPartition is configured explicitly, that setting wins.
* If it is not configured, enable it when the table is partitioned and the column mapping contains all partition columns.
*/
|
||||
List<String> partitionCols = OdpsUtil.getAllPartColumns(this.table.getSchema());
|
||||
List<String> configCols = this.originalConfig.getList(Key.COLUMN, String.class);
|
||||
LOG.info("partition columns:{}", partitionCols);
|
||||
LOG.info("config columns:{}", configCols);
|
||||
LOG.info("support dynamic partition:{}",this.originalConfig.getBool(Key.SUPPORT_DYNAMIC_PARTITION));
|
||||
LOG.info("partition format type:{}",this.originalConfig.getString("partitionFormatType"));
|
||||
if (this.originalConfig.getKeys().contains(Key.SUPPORT_DYNAMIC_PARTITION)) {
|
||||
this.supportDynamicPartition = this.originalConfig.getBool(Key.SUPPORT_DYNAMIC_PARTITION);
|
||||
if (supportDynamicPartition) {
|
||||
// custom partitioning
|
||||
if("custom".equalsIgnoreCase(originalConfig.getString("partitionFormatType"))){
|
||||
List<PartitionInfo> partitions = getListWithJson(originalConfig,"customPartitionColumns",PartitionInfo.class);
|
||||
// the custom partition configuration must exactly match the table's real partition columns
|
||||
if (!ListUtil.checkIfAllSameValue(partitions.stream().map(item->item.getName()).collect(Collectors.toList()), partitionCols)) {
|
||||
throw DataXException.asDataXException("custom partition config is not same as real partition info.");
|
||||
}
|
||||
} else {
|
||||
// dynamic partition writing is enabled: check that all partition columns appear in the column mapping, otherwise throw
|
||||
if (!ListUtil.checkIfBInA(configCols, partitionCols, false)) {
|
||||
throw DataXException.asDataXException("You config supportDynamicPartition as true, but didn't config all partition columns");
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// dynamic partition writing is disabled: make sure the column mapping contains no partition columns, otherwise throw
|
||||
if (ListUtil.checkIfHasSameValue(configCols, partitionCols)) {
|
||||
throw DataXException.asDataXException("You should config all partition columns in column param, or you can specify a static partition param");
|
||||
}
|
||||
}
|
||||
} else {
|
||||
if (OdpsUtil.isPartitionedTable(table)) {
|
||||
// partitioned table with partition columns in the column mapping: the partition columns must be either all configured or none of them
|
||||
if (ListUtil.checkIfBInA(configCols, partitionCols, false)) {
|
||||
// all partition columns are configured in column
|
||||
this.supportDynamicPartition = true;
|
||||
} else {
|
||||
// not every partition column is configured in column; if only some of them are, report an error
|
||||
if (ListUtil.checkIfHasSameValue(configCols, partitionCols)) {
|
||||
throw DataXException.asDataXException("You should config all partition columns in column param, or you can specify a static partition param");
|
||||
}
|
||||
// no partition column is configured at all, so disable dynamic partitioning
|
||||
this.supportDynamicPartition = false;
|
||||
}
|
||||
} else {
|
||||
LOG.info("{} is not a partition tale, set supportDynamicParition as false", this.tableName);
|
||||
this.supportDynamicPartition = false;
|
||||
}
|
||||
}
|
||||
|
||||
this.blockSizeInMB = this.originalConfig.getInt(Key.BLOCK_SIZE_IN_MB, 64);
|
||||
if(this.blockSizeInMB < 8) {
|
||||
this.blockSizeInMB = 8;
|
||||
}
|
||||
this.originalConfig.set(Key.BLOCK_SIZE_IN_MB, this.blockSizeInMB);
|
||||
LOG.info("blockSizeInMB={}.", this.blockSizeInMB);
|
||||
|
||||
if (IS_DEBUG) {
|
||||
LOG.debug("After master init(), job config now is: [\n{}\n] .",
|
||||
this.originalConfig.toJSON());
|
||||
// dynamic partition writing is not supported in distributed mode; abort if running in distribute mode
|
||||
LOG.info("current run mode: {}", System.getProperty("datax.executeMode"));
|
||||
if (supportDynamicPartition && StringUtils.equalsIgnoreCase("distribute", System.getProperty("datax.executeMode"))) {
|
||||
LOG.error("Distribute mode don't support dynamic partition writing");
|
||||
System.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
@ -148,10 +258,29 @@ public class OdpsWriter extends Writer {
|
||||
// init odps config
|
||||
this.odps = OdpsUtil.initOdpsProject(this.originalConfig);
|
||||
|
||||
//检查表等配置是否正确
|
||||
this.table = OdpsUtil.getTable(odps,this.projectName,this.tableName);
|
||||
List<String> preSqls = this.originalConfig.getList(Key.PRE_SQL, String.class);
|
||||
if (preSqls != null && !preSqls.isEmpty()) {
|
||||
LOG.info(String.format("Beigin to exectue preSql : %s. \n Attention: these preSqls must be idempotent!!!",
|
||||
JSON.toJSONString(preSqls)));
|
||||
long beginTime = System.currentTimeMillis();
|
||||
for (String preSql : preSqls) {
|
||||
preSql = preSql.trim();
|
||||
if (!preSql.endsWith(";")) {
|
||||
preSql = String.format("%s;", preSql);
|
||||
}
|
||||
OdpsUtil.runSqlTaskWithRetry(this.odps, preSql, "preSql");
|
||||
}
|
||||
long endTime = System.currentTimeMillis();
|
||||
LOG.info(String.format("Exectue odpswriter preSql successfully! cost time: %s ms.", (endTime - beginTime)));
|
||||
}
|
||||
|
||||
OdpsUtil.dealTruncate(this.odps, this.table, this.partition, this.truncate);
|
||||
//检查表等配置是否正确
|
||||
this.table = OdpsUtil.getTable(odps, this.projectName, this.tableName);
|
||||
|
||||
// 如果是动态分区写入,因为无需配置分区信息,因此也无法在任务初始化时进行 truncate
|
||||
if (!supportDynamicPartition) {
|
||||
OdpsUtil.dealTruncate(this.odps, this.table, this.partition, this.truncate);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
@ -169,20 +298,34 @@ public class OdpsWriter extends Writer {
|
||||
tableTunnel.setEndpoint(tunnelServer);
|
||||
}
|
||||
|
||||
this.masterUpload = OdpsUtil.createMasterTunnelUpload(
|
||||
tableTunnel, this.projectName, this.tableName, this.partition);
|
||||
this.uploadId = this.masterUpload.getId();
|
||||
LOG.info("Master uploadId:[{}].", this.uploadId);
|
||||
|
||||
TableSchema schema = this.masterUpload.getSchema();
|
||||
TableSchema schema = this.table.getSchema();
|
||||
List<String> allColumns = OdpsUtil.getAllColumns(schema);
|
||||
LOG.info("allColumnList: {} .", StringUtils.join(allColumns, ','));
|
||||
List<String> allPartColumns = OdpsUtil.getAllPartColumns(this.table.getSchema());
|
||||
LOG.info("allPartColumnsList: {} .", StringUtils.join(allPartColumns, ','));
|
||||
dealColumn(this.originalConfig, allColumns, allPartColumns);
|
||||
this.originalConfig.set("allColumns", allColumns);
|
||||
|
||||
dealColumn(this.originalConfig, allColumns);
|
||||
// in dynamic partition mode the upload sessions cannot be created ahead of time per partition
|
||||
if (!supportDynamicPartition) {
|
||||
this.masterUpload = OdpsUtil.createMasterTunnelUpload(
|
||||
tableTunnel, this.projectName, this.tableName, this.partition);
|
||||
this.uploadId = this.masterUpload.getId();
|
||||
LOG.info("Master uploadId:[{}].", this.uploadId);
|
||||
}
|
||||
|
||||
for (int i = 0; i < mandatoryNumber; i++) {
|
||||
Configuration tempConfig = this.originalConfig.clone();
|
||||
|
||||
// without dynamic partitioning, if consistencyCommit is set the tasks share the master upload session; otherwise every task commits on its own
|
||||
if (!supportDynamicPartition && this.consistencyCommit) {
|
||||
tempConfig.set(Key.UPLOAD_ID, uploadId);
|
||||
tempConfig.set(Key.TASK_COUNT, mandatoryNumber);
|
||||
}
|
||||
|
||||
// propagate supportDynamicPartition to each task
|
||||
tempConfig.set(Key.SUPPORT_DYNAMIC_PARTITION, this.supportDynamicPartition);
|
||||
|
||||
configurations.add(tempConfig);
|
||||
}
|
||||
|
||||
@ -190,14 +333,18 @@ public class OdpsWriter extends Writer {
|
||||
LOG.debug("After master split, the job config now is:[\n{}\n].", this.originalConfig);
|
||||
}
|
||||
|
||||
this.masterUpload = null;
|
||||
|
||||
return configurations;
|
||||
}
|
||||
|
||||
private void dealColumn(Configuration originalConfig, List<String> allColumns) {
|
||||
private void dealColumn(Configuration originalConfig, List<String> allColumns, List<String> allPartColumns) {
|
||||
//之前已经检查了userConfiguredColumns 一定不为空
|
||||
List<String> userConfiguredColumns = originalConfig.getList(Key.COLUMN, String.class);
|
||||
|
||||
// with dynamic partitioning, configuring column as "*" is not supported
|
||||
if (supportDynamicPartition && userConfiguredColumns.contains("*")) {
|
||||
throw DataXException.asDataXException(OdpsWriterErrorCode.ILLEGAL_VALUE,
|
||||
"In dynamic partition write mode you can't specify column with *.");
|
||||
}
|
||||
if (1 == userConfiguredColumns.size() && "*".equals(userConfiguredColumns.get(0))) {
|
||||
userConfiguredColumns = allColumns;
|
||||
originalConfig.set(Key.COLUMN, allColumns);
|
||||
@ -206,15 +353,51 @@ public class OdpsWriter extends Writer {
|
||||
ListUtil.makeSureNoValueDuplicate(userConfiguredColumns, false);
|
||||
|
||||
// check that the configured columns exist (case-insensitive)
|
||||
ListUtil.makeSureBInA(allColumns, userConfiguredColumns, false);
|
||||
if (supportDynamicPartition) {
|
||||
List<String> allColumnList = new ArrayList<String>();
|
||||
allColumnList.addAll(allColumns);
|
||||
allColumnList.addAll(allPartColumns);
|
||||
ListUtil.makeSureBInA(allColumnList, userConfiguredColumns, false);
|
||||
} else {
|
||||
ListUtil.makeSureBInA(allColumns, userConfiguredColumns, false);
|
||||
}
|
||||
}
|
||||
|
||||
List<Integer> columnPositions = OdpsUtil.parsePosition(allColumns, userConfiguredColumns);
|
||||
// resolve the real position of each configured column among the destination table's data columns; -1 marks a partition column
|
||||
List<Integer> columnPositions = OdpsUtil.parsePosition(allColumns, allPartColumns, userConfiguredColumns);
|
||||
originalConfig.set(Constant.COLUMN_POSITION, columnPositions);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void post() {
|
||||
|
||||
if (supportDynamicPartition) {
|
||||
LOG.info("Total create partition cnt:{}", partitionCnt);
|
||||
}
|
||||
|
||||
if (!supportDynamicPartition && this.consistencyCommit) {
|
||||
LOG.info("Master which uploadId=[{}] begin to commit blocks.", this.uploadId);
|
||||
OdpsUtil.masterComplete(this.masterUpload);
|
||||
LOG.info("Master which uploadId=[{}] commit blocks ok.", this.uploadId);
|
||||
}
|
||||
|
||||
List<String> postSqls = this.originalConfig.getList(Key.POST_SQL, String.class);
|
||||
if (postSqls != null && !postSqls.isEmpty()) {
|
||||
LOG.info(String.format("Beigin to exectue postSql : %s. \n Attention: these postSqls must be idempotent!!!",
|
||||
JSON.toJSONString(postSqls)));
|
||||
long beginTime = System.currentTimeMillis();
|
||||
for (String postSql : postSqls) {
|
||||
postSql = postSql.trim();
|
||||
if (!postSql.endsWith(";")) {
|
||||
postSql = String.format("%s;", postSql);
|
||||
}
|
||||
OdpsUtil.runSqlTaskWithRetry(this.odps, postSql, "postSql");
|
||||
}
|
||||
long endTime = System.currentTimeMillis();
|
||||
LOG.info(String.format("Exectue odpswriter postSql successfully! cost time: %s ms.", (endTime - beginTime)));
|
||||
}
|
||||
|
||||
LOG.info("truncated record count: {}", globalTotalTruncatedRecordNumber.intValue() );
|
||||
}
|
||||
|
||||
@Override
|
||||
@ -226,6 +409,7 @@ public class OdpsWriter extends Writer {
|
||||
public static class Task extends Writer.Task {
|
||||
private static final Logger LOG = LoggerFactory
|
||||
.getLogger(Task.class);
|
||||
private static final MessageSource MESSAGE_SOURCE = MessageSource.loadResourceBundle(OdpsWriter.class);
|
||||
|
||||
private static final boolean IS_DEBUG = LOG.isDebugEnabled();
|
||||
|
||||
@ -246,18 +430,54 @@ public class OdpsWriter extends Writer {
|
||||
private List<Long> blocks;
|
||||
private int blockSizeInMB;
|
||||
|
||||
private boolean consistencyCommit;
|
||||
|
||||
private int taskId;
|
||||
private int taskCount;
|
||||
|
||||
private Integer failoverState = 0; // 0: not failed over; 1: preparing to fail over; 2: committed, failover no longer allowed
|
||||
private byte[] lock = new byte[0];
|
||||
private List<String> allColumns;
|
||||
|
||||
/*
* Mapping from partition to upload session: when a record is routed to a partition, it is uploaded through that partition's proxy.
* The key is the values of all partition columns concatenated in the configured order.
*/
|
||||
private HashMap</*partition*/String, Pair<OdpsWriterProxy, /*blocks*/List<Long>>> partitionUploadSessionHashMap;
|
||||
private Boolean supportDynamicPartition;
|
||||
private TableTunnel tableTunnel;
|
||||
private Table table;
|
||||
|
||||
/**
* Format-conversion rules for partition columns; only source columns that are Date columns, or String columns holding dates, are supported.
*/
|
||||
private HashMap<String, DateTransForm> dateTransFormMap;
|
||||
|
||||
private Long writeTimeOutInMs;
|
||||
|
||||
private String overLengthRule;
|
||||
private int maxFieldLength;
|
||||
private Boolean enableOverLengthOutput;
|
||||
|
||||
/**
* In dynamic partition write mode, the flush interval (in minutes) applied once heap usage reaches 80%.
* The default is 1 minute, which avoids flushing so frequently that many small files are produced.
*/
|
||||
private int dynamicPartitionMemUsageFlushIntervalInMinute = 1;
|
||||
|
||||
private long latestFlushTime = 0;
|
||||
|
||||
@Override
|
||||
public void init() {
|
||||
this.sliceConfig = super.getPluginJobConf();
|
||||
|
||||
// default write timeout: ten minutes
|
||||
this.writeTimeOutInMs = this.sliceConfig.getLong(Key.WRITE_TIMEOUT_IN_MS, 10 * 60 * 1000);
|
||||
this.projectName = this.sliceConfig.getString(Key.PROJECT);
|
||||
this.tableName = this.sliceConfig.getString(Key.TABLE);
|
||||
this.tunnelServer = this.sliceConfig.getString(Key.TUNNEL_SERVER, null);
|
||||
this.partition = OdpsUtil.formatPartition(this.sliceConfig
|
||||
.getString(Key.PARTITION, ""));
|
||||
.getString(Key.PARTITION, ""), true);
|
||||
this.sliceConfig.set(Key.PARTITION, this.partition);
|
||||
|
||||
this.emptyAsNull = this.sliceConfig.getBool(Key.EMPTY_AS_NULL);
|
||||
@ -265,9 +485,49 @@ public class OdpsWriter extends Writer {
|
||||
this.isCompress = this.sliceConfig.getBool(Key.IS_COMPRESS, false);
|
||||
if (this.blockSizeInMB < 1 || this.blockSizeInMB > 512) {
|
||||
throw DataXException.asDataXException(OdpsWriterErrorCode.ILLEGAL_VALUE,
|
||||
String.format("您配置的blockSizeInMB:%s 参数错误. 正确的配置是[1-512]之间的整数. 请修改此参数的值为该区间内的数值", this.blockSizeInMB));
|
||||
MESSAGE_SOURCE.message("odpswriter.3", this.blockSizeInMB));
|
||||
}
|
||||
|
||||
this.taskId = this.getTaskId();
|
||||
this.taskCount = this.sliceConfig.getInt(Key.TASK_COUNT, 0);
|
||||
|
||||
this.supportDynamicPartition = this.sliceConfig.getBool(Key.SUPPORT_DYNAMIC_PARTITION, false);
|
||||
|
||||
if (!supportDynamicPartition) {
|
||||
this.consistencyCommit = this.sliceConfig.getBool(Key.CONSISTENCY_COMMIT, false);
|
||||
if (consistencyCommit) {
|
||||
this.uploadId = this.sliceConfig.getString(Key.UPLOAD_ID);
|
||||
if (this.uploadId == null || this.uploadId.isEmpty()) {
|
||||
throw DataXException.asDataXException(OdpsWriterErrorCode.ILLEGAL_VALUE,
|
||||
MESSAGE_SOURCE.message("odpswriter.3", this.uploadId));
|
||||
}
|
||||
}
|
||||
} else {
|
||||
this.partitionUploadSessionHashMap = new HashMap<>();
|
||||
|
||||
// initialize dateTransFormMap from the partitionColumnMapping configuration
|
||||
String dateTransListStr = this.sliceConfig.getString(Key.PARTITION_COL_MAPPING);
|
||||
if (StringUtils.isNotBlank(dateTransListStr)) {
|
||||
this.dateTransFormMap = new HashMap<>();
|
||||
JSONArray dateTransFormJsonArray = JSONArray.parseArray(dateTransListStr);
|
||||
for (Object dateTransFormJson : dateTransFormJsonArray) {
|
||||
DateTransForm dateTransForm = new DateTransForm(
|
||||
((JSONObject)dateTransFormJson).getString(Key.PARTITION_COL_MAPPING_NAME),
|
||||
((JSONObject)dateTransFormJson).getString(Key.PARTITION_COL_MAPPING_SRC_COL_DATEFORMAT),
|
||||
((JSONObject)dateTransFormJson).getString(Key.PARTITION_COL_MAPPING_DATEFORMAT));
|
||||
this.dateTransFormMap.put(((JSONObject)dateTransFormJson).getString(Key.PARTITION_COL_MAPPING_NAME), dateTransForm);
|
||||
}
|
||||
}
|
||||
}
|
||||
this.allColumns = this.sliceConfig.getList("allColumns", String.class);
|
||||
this.overLengthRule = this.sliceConfig.getString(Key.OVER_LENGTH_RULE, "keepOn").toUpperCase();
|
||||
this.maxFieldLength = this.sliceConfig.getInt(Key.MAX_FIELD_LENGTH, Constant.DEFAULT_FIELD_MAX_SIZE);
|
||||
this.enableOverLengthOutput = this.sliceConfig.getBool(Key.ENABLE_OVER_LENGTH_OUTPUT, true);
|
||||
maxOutputOverLengthRecord = this.sliceConfig.getLong(Key.MAX_OVER_LENGTH_OUTPUT_COUNT);
|
||||
maxOdpsFieldLength = this.sliceConfig.getInt(Key.MAX_ODPS_FIELD_LENGTH, Constant.DEFAULT_FIELD_MAX_SIZE);
|
||||
|
||||
this.dynamicPartitionMemUsageFlushIntervalInMinute = this.sliceConfig.getInt(Key.DYNAMIC_PARTITION_MEM_USAGE_FLUSH_INTERVAL_IN_MINUTE,
|
||||
1);
|
||||
if (IS_DEBUG) {
|
||||
LOG.debug("After init in task, sliceConfig now is:[\n{}\n].", this.sliceConfig);
|
||||
}
|
||||
@ -277,24 +537,32 @@ public class OdpsWriter extends Writer {
|
||||
@Override
|
||||
public void prepare() {
|
||||
this.odps = OdpsUtil.initOdpsProject(this.sliceConfig);
|
||||
this.tableTunnel = new TableTunnel(this.odps);
|
||||
|
||||
TableTunnel tableTunnel = new TableTunnel(this.odps);
|
||||
if (StringUtils.isNoneBlank(tunnelServer)) {
|
||||
tableTunnel.setEndpoint(tunnelServer);
|
||||
if (! supportDynamicPartition ) {
|
||||
if (StringUtils.isNoneBlank(tunnelServer)) {
|
||||
tableTunnel.setEndpoint(tunnelServer);
|
||||
}
|
||||
if (this.consistencyCommit) {
|
||||
this.managerUpload = OdpsUtil.getSlaveTunnelUpload(this.tableTunnel, this.projectName, this.tableName,
|
||||
this.partition, this.uploadId);
|
||||
} else {
|
||||
this.managerUpload = OdpsUtil.createMasterTunnelUpload(this.tableTunnel, this.projectName,
|
||||
this.tableName, this.partition);
|
||||
this.uploadId = this.managerUpload.getId();
|
||||
}
|
||||
LOG.info("task uploadId:[{}].", this.uploadId);
|
||||
this.workerUpload = OdpsUtil.getSlaveTunnelUpload(this.tableTunnel, this.projectName,
|
||||
this.tableName, this.partition, uploadId);
|
||||
} else {
|
||||
this.table = OdpsUtil.getTable(this.odps, this.projectName, this.tableName);
|
||||
}
|
||||
|
||||
this.managerUpload = OdpsUtil.createMasterTunnelUpload(tableTunnel, this.projectName,
|
||||
this.tableName, this.partition);
|
||||
this.uploadId = this.managerUpload.getId();
|
||||
LOG.info("task uploadId:[{}].", this.uploadId);
|
||||
|
||||
this.workerUpload = OdpsUtil.getSlaveTunnelUpload(tableTunnel, this.projectName,
|
||||
this.tableName, this.partition, uploadId);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void startWrite(RecordReceiver recordReceiver) {
|
||||
blocks = new ArrayList<Long>();
|
||||
List<Long> currentWriteBlocks;
|
||||
|
||||
AtomicLong blockId = new AtomicLong(0);
|
||||
|
||||
@ -304,35 +572,212 @@ public class OdpsWriter extends Writer {
|
||||
try {
|
||||
TaskPluginCollector taskPluginCollector = super.getTaskPluginCollector();
|
||||
|
||||
OdpsWriterProxy proxy = new OdpsWriterProxy(this.workerUpload, this.blockSizeInMB, blockId,
|
||||
columnPositions, taskPluginCollector, this.emptyAsNull, this.isCompress);
|
||||
OdpsWriterProxy proxy;
|
||||
// made configurable, just to be safe
|
||||
boolean checkWithGetSize = this.sliceConfig.getBool("checkWithGetSize", true);
|
||||
if (!supportDynamicPartition) {
|
||||
if (this.consistencyCommit) {
|
||||
proxy = new OdpsWriterProxy(this.workerUpload, this.blockSizeInMB, blockId, taskId, taskCount,
|
||||
columnPositions, taskPluginCollector, this.emptyAsNull, this.isCompress, checkWithGetSize, this.allColumns, this.writeTimeOutInMs, this.sliceConfig, this.overLengthRule, this.maxFieldLength, this.enableOverLengthOutput);
|
||||
} else {
|
||||
proxy = new OdpsWriterProxy(this.workerUpload, this.blockSizeInMB, blockId,
|
||||
columnPositions, taskPluginCollector, this.emptyAsNull, this.isCompress, checkWithGetSize, this.allColumns, false, this.writeTimeOutInMs, this.sliceConfig, this.overLengthRule, this.maxFieldLength, this.enableOverLengthOutput);
|
||||
}
|
||||
currentWriteBlocks = blocks;
|
||||
} else {
|
||||
proxy = null;
|
||||
currentWriteBlocks = null;
|
||||
}
|
||||
|
||||
com.alibaba.datax.common.element.Record dataXRecord = null;
|
||||
|
||||
PerfRecord blockClose = new PerfRecord(super.getTaskGroupId(),super.getTaskId(), PerfRecord.PHASE.ODPS_BLOCK_CLOSE);
|
||||
PerfRecord blockClose = new PerfRecord(super.getTaskGroupId(), super.getTaskId(), PerfRecord.PHASE.ODPS_BLOCK_CLOSE);
|
||||
blockClose.start();
|
||||
long blockCloseUsedTime = 0;
|
||||
boolean columnCntChecked = false;
|
||||
while ((dataXRecord = recordReceiver.getFromReader()) != null) {
|
||||
blockCloseUsedTime += proxy.writeOneRecord(dataXRecord, blocks);
|
||||
if (supportDynamicPartition) {
|
||||
if (!columnCntChecked) {
|
||||
// in dynamic partition mode the reader and the writer must have the same number of columns
|
||||
if (dataXRecord.getColumnNumber() != this.sliceConfig.getList(Key.COLUMN).size()) {
|
||||
throw DataXException.asDataXException(OdpsWriterErrorCode.ILLEGAL_VALUE,
|
||||
"In dynamic partition write mode you must make sure reader and writer has same column count.");
|
||||
}
|
||||
columnCntChecked = true;
|
||||
}
|
||||
|
||||
// in dynamic partition mode the proxy is chosen based on the record's content
|
||||
|
||||
String partitionFormatType = sliceConfig.getString("partitionFormatType");
|
||||
String partition;
|
||||
if("custom".equalsIgnoreCase(partitionFormatType)){
|
||||
List<PartitionInfo> partitions = getListWithJson(sliceConfig,"customPartitionColumns",PartitionInfo.class);
|
||||
List<UserDefinedFunction> functions = getListWithJson(sliceConfig,"customPartitionFunctions",UserDefinedFunction.class);
|
||||
|
||||
partition = CustomPartitionUtils.generate(dataXRecord,functions,
|
||||
partitions,sliceConfig.getList(Key.COLUMN, String.class));
|
||||
}else{
|
||||
partition = OdpsUtil.getPartColValFromDataXRecord(dataXRecord, columnPositions,
|
||||
this.sliceConfig.getList(Key.COLUMN, String.class),
|
||||
this.dateTransFormMap);
|
||||
partition = OdpsUtil.formatPartition(partition, false);
|
||||
}
|
||||
|
||||
Pair<OdpsWriterProxy, List<Long>> proxyBlocksPair = this.partitionUploadSessionHashMap.get(partition);
|
||||
if (null != proxyBlocksPair) {
|
||||
proxy = proxyBlocksPair.getLeft();
|
||||
currentWriteBlocks = proxyBlocksPair.getRight();
|
||||
if (null == proxy || null == currentWriteBlocks) {
|
||||
throw DataXException.asDataXException("Get OdpsWriterProxy failed.");
|
||||
}
|
||||
} else {
|
||||
/*
|
||||
* 第一次写入该目标分区:处理truncate
|
||||
* truncate 为 true,且还没有被truncate过,则truncate,加互斥锁
|
||||
*/
|
||||
Boolean truncate = this.sliceConfig.getBool(Key.TRUNCATE);
|
||||
if (truncate && !partitionsDealedTruncate.contains(partition)) {
|
||||
synchronized (lockForPartitionDealedTruncate) {
|
||||
if (!partitionsDealedTruncate.contains(partition)) {
|
||||
LOG.info("Start to truncate partition {}", partition);
|
||||
OdpsUtil.dealTruncate(this.odps, this.table, partition, truncate);
|
||||
partitionsDealedTruncate.add(partition);
|
||||
}
|
||||
/*
* If too many partitions have been created, fail the job.
*/
|
||||
if (partitionCnt.addAndGet(1) > maxPartitionCnt) {
|
||||
throw new DataXException("Create too many partitions. Please make sure you config the right partition column");
|
||||
}
|
||||
}
|
||||
}
|
||||
TableTunnel.UploadSession uploadSession = OdpsUtil.createMasterTunnelUpload(tableTunnel, this.projectName,
|
||||
this.tableName, partition);
|
||||
proxy = new OdpsWriterProxy(uploadSession, this.blockSizeInMB, blockId,
|
||||
columnPositions, taskPluginCollector, this.emptyAsNull, this.isCompress, checkWithGetSize, this.allColumns, true, this.writeTimeOutInMs, this.sliceConfig, this.overLengthRule, this.maxFieldLength, this.enableOverLengthOutput);
|
||||
currentWriteBlocks = new ArrayList<>();
|
||||
partitionUploadSessionHashMap.put(partition, new MutablePair<>(proxy, currentWriteBlocks));
|
||||
}
|
||||
}
|
||||
blockCloseUsedTime += proxy.writeOneRecord(dataXRecord, currentWriteBlocks);
|
||||
|
||||
// in dynamic partition write mode, once memory usage reaches about 80%, release partitions that have been idle for a while or that buffer a lot of data
|
||||
if (supportDynamicPartition) {
|
||||
boolean isNeedFush = checkIfNeedFlush();
|
||||
if (isNeedFush) {
|
||||
LOG.info("====The memory used exceed 80%, start to clear...===");
|
||||
int releaseCnt = 0;
|
||||
int remainCnt = 0;
|
||||
for (String onePartition : partitionUploadSessionHashMap.keySet()) {
|
||||
OdpsWriterProxy oneIdleProxy = partitionUploadSessionHashMap.get(onePartition) == null ? null : partitionUploadSessionHashMap.get(onePartition).getLeft();
|
||||
if (oneIdleProxy == null) {
|
||||
continue;
|
||||
}
|
||||
|
||||
Long idleTime = System.currentTimeMillis() - oneIdleProxy.getLastActiveTime();
|
||||
if (idleTime > Constant.PROXY_MAX_IDLE_TIME_MS || oneIdleProxy.getCurrentTotalBytes() > (this.blockSizeInMB * 1024 * 1024 / 2)) {
|
||||
// idle for a while: write out its buffered data first
|
||||
LOG.info("{} partition has no data last {} seconds, so release its uploadSession", onePartition, Constant.PROXY_MAX_IDLE_TIME_MS / 1000);
|
||||
currentWriteBlocks = partitionUploadSessionHashMap.get(onePartition).getRight();
|
||||
blockCloseUsedTime += oneIdleProxy.writeRemainingRecord(currentWriteBlocks);
|
||||
// then release it
|
||||
partitionUploadSessionHashMap.put(onePartition, null);
|
||||
releaseCnt++;
|
||||
} else {
|
||||
remainCnt++;
|
||||
}
|
||||
}
|
||||
|
||||
// not enough proxies were released; release more (arbitrary ones this time) until about half of them are freed
|
||||
for (String onePartition : partitionUploadSessionHashMap.keySet()) {
|
||||
if (releaseCnt >= remainCnt) {
|
||||
break;
|
||||
}
|
||||
|
||||
if (partitionUploadSessionHashMap.get(onePartition) != null) {
|
||||
OdpsWriterProxy oneIdleProxy = partitionUploadSessionHashMap.get(onePartition).getLeft();
|
||||
currentWriteBlocks = partitionUploadSessionHashMap.get(onePartition).getRight();
|
||||
blockCloseUsedTime += oneIdleProxy.writeRemainingRecord(currentWriteBlocks);
|
||||
partitionUploadSessionHashMap.put(onePartition, null);
|
||||
|
||||
releaseCnt++;
|
||||
remainCnt--;
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
this.latestFlushTime = System.currentTimeMillis();
|
||||
LOG.info("===complete===");
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
blockCloseUsedTime += proxy.writeRemainingRecord(blocks);
|
||||
blockClose.end(blockCloseUsedTime);
|
||||
// flush the remaining records of every partition
|
||||
if (supportDynamicPartition) {
|
||||
for (String partition : partitionUploadSessionHashMap.keySet()) {
|
||||
if (partitionUploadSessionHashMap.get(partition) == null) {
|
||||
continue;
|
||||
}
|
||||
proxy = partitionUploadSessionHashMap.get(partition).getLeft();
|
||||
currentWriteBlocks = partitionUploadSessionHashMap.get(partition).getRight();
|
||||
blockCloseUsedTime += proxy.writeRemainingRecord(currentWriteBlocks);
|
||||
blockClose.end(blockCloseUsedTime);
|
||||
}
|
||||
}
|
||||
else {
|
||||
blockCloseUsedTime += proxy.writeRemainingRecord(blocks);
|
||||
blockClose.end(blockCloseUsedTime);
|
||||
}
|
||||
} catch (Exception e) {
|
||||
throw DataXException.asDataXException(OdpsWriterErrorCode.WRITER_RECORD_FAIL, "写入 ODPS 目的表失败. 请联系 ODPS 管理员处理.", e);
|
||||
throw DataXException.asDataXException(OdpsWriterErrorCode.WRITER_RECORD_FAIL, MESSAGE_SOURCE.message("odpswriter.4"), e);
|
||||
}
|
||||
}
|
||||
|
||||
private boolean checkIfNeedFlush() {
|
||||
|
||||
// check whether the flush interval has elapsed since the last flush
|
||||
boolean isArriveFlushTime = (System.currentTimeMillis() - this.latestFlushTime) > this.dynamicPartitionMemUsageFlushIntervalInMinute * 60 * 1000;
|
||||
if (!isArriveFlushTime) {
|
||||
// if the flush interval has not elapsed yet, return immediately
|
||||
return false;
|
||||
}
|
||||
|
||||
MemoryUsage memoryUsage = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
|
||||
boolean isMemUsageExceed = (double)memoryUsage.getUsed() / memoryUsage.getMax() > 0.8f;
|
||||
return isMemUsageExceed;
|
||||
}
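The routing key for partitionUploadSessionHashMap is, per the comment on that field, the partition column values concatenated in the configured order; the writer builds it through OdpsUtil.getPartColValFromDataXRecord and OdpsUtil.formatPartition, which are not part of this hunk. A rough sketch of constructing such a key, assuming the usual ODPS partition-spec form `col='value'` and a hypothetical helper:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class PartitionKeySketch {

    /**
     * Builds a partition spec such as pt='20200801',region='cn' from partition
     * column values kept in configured order (hence the LinkedHashMap).
     */
    public static String buildPartitionSpec(LinkedHashMap<String, String> partColValues) {
        StringJoiner joiner = new StringJoiner(",");
        for (Map.Entry<String, String> entry : partColValues.entrySet()) {
            joiner.add(entry.getKey() + "='" + entry.getValue() + "'");
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        LinkedHashMap<String, String> values = new LinkedHashMap<>();
        values.put("pt", "20200801");
        values.put("region", "cn");
        System.out.println(buildPartitionSpec(values)); // pt='20200801',region='cn'
    }
}
```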
|
||||
|
||||
@Override
|
||||
public void post() {
|
||||
synchronized (lock){
|
||||
if(failoverState==0){
|
||||
synchronized (lock) {
|
||||
if (failoverState == 0) {
|
||||
failoverState = 2;
|
||||
LOG.info("Slave which uploadId=[{}] begin to commit blocks:[\n{}\n].", this.uploadId,
|
||||
StringUtils.join(blocks, ","));
|
||||
OdpsUtil.masterCompleteBlocks(this.managerUpload, blocks.toArray(new Long[0]));
|
||||
LOG.info("Slave which uploadId=[{}] commit blocks ok.", this.uploadId);
|
||||
}else{
|
||||
if (! supportDynamicPartition) {
|
||||
if (! this.consistencyCommit) {
|
||||
LOG.info("Slave which uploadId=[{}] begin to commit blocks:[\n{}\n].", this.uploadId,
|
||||
StringUtils.join(blocks, ","));
|
||||
OdpsUtil.masterCompleteBlocks(this.managerUpload, blocks.toArray(new Long[0]));
|
||||
LOG.info("Slave which uploadId=[{}] commit blocks ok.", this.uploadId);
|
||||
} else {
|
||||
LOG.info("Slave which uploadId=[{}] begin to check blocks:[\n{}\n].", this.uploadId,
|
||||
StringUtils.join(blocks, ","));
|
||||
OdpsUtil.checkBlockComplete(this.managerUpload, blocks.toArray(new Long[0]));
|
||||
LOG.info("Slave which uploadId=[{}] check blocks ok.", this.uploadId);
|
||||
}
|
||||
} else {
|
||||
for (String partition : partitionUploadSessionHashMap.keySet()) {
|
||||
OdpsWriterProxy proxy = partitionUploadSessionHashMap.get(partition).getLeft();
|
||||
List<Long> blocks = partitionUploadSessionHashMap.get(partition).getRight();
|
||||
TableTunnel.UploadSession uploadSession = proxy.getSlaveUpload();
|
||||
LOG.info("Slave which uploadId=[{}] begin to check blocks:[\n{}\n].", uploadSession.getId(),
|
||||
StringUtils.join(blocks, ","));
|
||||
OdpsUtil.masterCompleteBlocks(uploadSession, blocks.toArray(new Long[0]));
|
||||
LOG.info("Slave which uploadId=[{}] check blocks ok.", uploadSession.getId());
|
||||
}
|
||||
}
|
||||
|
||||
} else {
|
||||
throw DataXException.asDataXException(CommonErrorCode.SHUT_DOWN_TASK, "");
|
||||
}
|
||||
}
|
||||
@ -343,9 +788,9 @@ public class OdpsWriter extends Writer {
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean supportFailOver(){
|
||||
synchronized (lock){
|
||||
if(failoverState==0){
|
||||
public boolean supportFailOver() {
|
||||
synchronized (lock) {
|
||||
if (failoverState == 0) {
|
||||
failoverState = 1;
|
||||
return true;
|
||||
}
|
||||
|
@ -1,42 +1,43 @@
|
||||
package com.alibaba.datax.plugin.writer.odpswriter;
|
||||
|
||||
import com.alibaba.datax.common.spi.ErrorCode;
|
||||
import com.alibaba.datax.common.util.MessageSource;
|
||||
|
||||
public enum OdpsWriterErrorCode implements ErrorCode {
|
||||
REQUIRED_VALUE("OdpsWriter-00", "您缺失了必须填写的参数值."),
|
||||
ILLEGAL_VALUE("OdpsWriter-01", "您配置的值不合法."),
|
||||
UNSUPPORTED_COLUMN_TYPE("OdpsWriter-02", "DataX 不支持写入 ODPS 的目的表的此种数据类型."),
|
||||
REQUIRED_VALUE("OdpsWriter-00", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.required_value")),
|
||||
ILLEGAL_VALUE("OdpsWriter-01", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.illegal_value")),
|
||||
UNSUPPORTED_COLUMN_TYPE("OdpsWriter-02", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.unsupported_column_type")),
|
||||
|
||||
TABLE_TRUNCATE_ERROR("OdpsWriter-03", "清空 ODPS 目的表时出错."),
|
||||
CREATE_MASTER_UPLOAD_FAIL("OdpsWriter-04", "创建 ODPS 的 uploadSession 失败."),
|
||||
GET_SLAVE_UPLOAD_FAIL("OdpsWriter-05", "获取 ODPS 的 uploadSession 失败."),
|
||||
GET_ID_KEY_FAIL("OdpsWriter-06", "获取 accessId/accessKey 失败."),
|
||||
GET_PARTITION_FAIL("OdpsWriter-07", "获取 ODPS 目的表的所有分区失败."),
|
||||
TABLE_TRUNCATE_ERROR("OdpsWriter-03", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.table_truncate_error")),
|
||||
CREATE_MASTER_UPLOAD_FAIL("OdpsWriter-04", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.create_master_upload_fail")),
|
||||
GET_SLAVE_UPLOAD_FAIL("OdpsWriter-05", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.get_slave_upload_fail")),
|
||||
GET_ID_KEY_FAIL("OdpsWriter-06", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.get_id_key_fail")),
|
||||
GET_PARTITION_FAIL("OdpsWriter-07", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.get_partition_fail")),
|
||||
|
||||
ADD_PARTITION_FAILED("OdpsWriter-08", "添加分区到 ODPS 目的表失败."),
|
||||
WRITER_RECORD_FAIL("OdpsWriter-09", "写入数据到 ODPS 目的表失败."),
|
||||
ADD_PARTITION_FAILED("OdpsWriter-08", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.add_partition_failed")),
|
||||
WRITER_RECORD_FAIL("OdpsWriter-09", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.writer_record_fail")),
|
||||
|
||||
COMMIT_BLOCK_FAIL("OdpsWriter-10", "提交 block 到 ODPS 目的表失败."),
|
||||
RUN_SQL_FAILED("OdpsWriter-11", "执行 ODPS Sql 失败."),
|
||||
CHECK_IF_PARTITIONED_TABLE_FAILED("OdpsWriter-12", "检查 ODPS 目的表:%s 是否为分区表失败."),
|
||||
COMMIT_BLOCK_FAIL("OdpsWriter-10", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.commit_block_fail")),
|
||||
RUN_SQL_FAILED("OdpsWriter-11", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.run_sql_failed")),
|
||||
CHECK_IF_PARTITIONED_TABLE_FAILED("OdpsWriter-12", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.check_if_partitioned_table_failed")),
|
||||
|
||||
RUN_SQL_ODPS_EXCEPTION("OdpsWriter-13", "执行 ODPS Sql 时抛出异常, 可重试"),
|
||||
RUN_SQL_ODPS_EXCEPTION("OdpsWriter-13", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.run_sql_odps_exception")),
|
||||
|
||||
ACCOUNT_TYPE_ERROR("OdpsWriter-30", "账号类型错误."),
|
||||
ACCOUNT_TYPE_ERROR("OdpsWriter-30", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.account_type_error")),
|
||||
|
||||
PARTITION_ERROR("OdpsWriter-31", "分区配置错误."),
|
||||
PARTITION_ERROR("OdpsWriter-31", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.partition_error")),
|
||||
|
||||
COLUMN_NOT_EXIST("OdpsWriter-32", "用户配置的列不存在."),
|
||||
COLUMN_NOT_EXIST("OdpsWriter-32", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.column_not_exist")),
|
||||
|
||||
ODPS_PROJECT_NOT_FOUNT("OdpsWriter-100", "您配置的值不合法, odps project 不存在."), //ODPS-0420111: Project not found
|
||||
ODPS_PROJECT_NOT_FOUNT("OdpsWriter-100", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.odps_project_not_fount")), //ODPS-0420111: Project not found
|
||||
|
||||
ODPS_TABLE_NOT_FOUNT("OdpsWriter-101", "您配置的值不合法, odps table 不存在"), // ODPS-0130131:Table not found
|
||||
ODPS_TABLE_NOT_FOUNT("OdpsWriter-101", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.odps_table_not_fount")), // ODPS-0130131:Table not found
|
||||
|
||||
ODPS_ACCESS_KEY_ID_NOT_FOUND("OdpsWriter-102", "您配置的值不合法, odps accessId,accessKey 不存在"), //ODPS-0410051:Invalid credentials - accessKeyId not found
|
||||
ODPS_ACCESS_KEY_ID_NOT_FOUND("OdpsWriter-102", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.odps_access_key_id_not_found")), //ODPS-0410051:Invalid credentials - accessKeyId not found
|
||||
|
||||
ODPS_ACCESS_KEY_INVALID("OdpsWriter-103", "您配置的值不合法, odps accessKey 错误"), //ODPS-0410042:Invalid signature value - User signature dose not match;
|
||||
ODPS_ACCESS_KEY_INVALID("OdpsWriter-103", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.odps_access_key_invalid")), //ODPS-0410042:Invalid signature value - User signature dose not match;
|
||||
|
||||
ODPS_ACCESS_DENY("OdpsWriter-104", "拒绝访问, 您不在 您配置的 project 中") //ODPS-0420095: Access Denied - Authorization Failed [4002], You doesn't exist in project
|
||||
ODPS_ACCESS_DENY("OdpsWriter-104", MessageSource.loadResourceBundle(OdpsWriterErrorCode.class).message("errorcode.odps_access_deny")) //ODPS-0420095: Access Denied - Authorization Failed [4002], You doesn't exist in project
|
||||
|
||||
;
Some files were not shown because too many files have changed in this diff.