diff --git a/README.md b/README.md index 46bbe03c..01bbc3ea 100644 --- a/README.md +++ b/README.md @@ -1,9 +1,10 @@ ![Datax-logo](https://github.com/alibaba/DataX/blob/master/images/DataX-logo.jpg) - # DataX -DataX 是阿里云 [DataWorks数据集成](https://www.aliyun.com/product/bigdata/ide) 的开源版本,在阿里巴巴集团内被广泛使用的离线数据同步工具/平台。DataX 实现了包括 MySQL、Oracle、OceanBase、SqlServer、Postgre、HDFS、Hive、ADS、HBase、TableStore(OTS)、MaxCompute(ODPS)、Hologres、DRDS 等各种异构数据源之间高效的数据同步功能。 +[![Leaderboard](https://img.shields.io/badge/DataX-%E6%9F%A5%E7%9C%8B%E8%B4%A1%E7%8C%AE%E6%8E%92%E8%A1%8C%E6%A6%9C-orange)](https://opensource.alibaba.com/contribution_leaderboard/details?projectValue=datax) + +DataX 是阿里云 [DataWorks数据集成](https://www.aliyun.com/product/bigdata/ide) 的开源版本,在阿里巴巴集团内被广泛使用的离线数据同步工具/平台。DataX 实现了包括 MySQL、Oracle、OceanBase、SqlServer、Postgre、HDFS、Hive、ADS、HBase、TableStore(OTS)、MaxCompute(ODPS)、Hologres、DRDS, databend 等各种异构数据源之间高效的数据同步功能。 # DataX 商业版本 阿里云DataWorks数据集成是DataX团队在阿里云上的商业化产品,致力于提供复杂网络环境下、丰富的异构数据源之间高速稳定的数据移动能力,以及繁杂业务背景下的数据同步解决方案。目前已经支持云上近3000家客户,单日同步数据超过3万亿条。DataWorks数据集成目前支持离线50+种数据源,可以进行整库迁移、批量上云、增量同步、分库分表等各类同步解决方案。2020年更新实时同步能力,支持10+种数据源的读写任意组合。提供MySQL,Oracle等多种数据源到阿里云MaxCompute,Hologres等大数据引擎的一键全增量同步解决方案。 @@ -25,7 +26,7 @@ DataX本身作为数据同步框架,将不同数据源的同步抽象为从源 # Quick Start -##### Download [DataX下载地址](https://datax-opensource.oss-cn-hangzhou.aliyuncs.com/202210/datax.tar.gz) +##### Download [DataX下载地址](https://datax-opensource.oss-cn-hangzhou.aliyuncs.com/202303/datax.tar.gz) ##### 请点击:[Quick Start](https://github.com/alibaba/DataX/blob/master/userGuid.md) @@ -36,44 +37,47 @@ DataX本身作为数据同步框架,将不同数据源的同步抽象为从源 DataX目前已经有了比较全面的插件体系,主流的RDBMS数据库、NOSQL、大数据计算系统都已经接入,目前支持数据如下图,详情请点击:[DataX数据源参考指南](https://github.com/alibaba/DataX/wiki/DataX-all-data-channels) -| 类型 | 数据源 | Reader(读) |Writer(写)| 文档 | 
-|--------------|---------------------------|:---------:|:-------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| -| RDBMS 关系型数据库 | MySQL |√|√| [读](https://github.com/alibaba/DataX/blob/master/mysqlreader/doc/mysqlreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/mysqlwriter/doc/mysqlwriter.md) | -| | Oracle |√|√| [读](https://github.com/alibaba/DataX/blob/master/oraclereader/doc/oraclereader.md) 、[写](https://github.com/alibaba/DataX/blob/master/oraclewriter/doc/oraclewriter.md) | -| | OceanBase |√|√| [读](https://open.oceanbase.com/docs/community/oceanbase-database/V3.1.0/use-datax-to-full-migration-data-to-oceanbase) 、[写](https://open.oceanbase.com/docs/community/oceanbase-database/V3.1.0/use-datax-to-full-migration-data-to-oceanbase) | -| | SQLServer |√|√| [读](https://github.com/alibaba/DataX/blob/master/sqlserverreader/doc/sqlserverreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/sqlserverwriter/doc/sqlserverwriter.md) | -| | PostgreSQL |√|√| [读](https://github.com/alibaba/DataX/blob/master/postgresqlreader/doc/postgresqlreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/postgresqlwriter/doc/postgresqlwriter.md) | -| | DRDS |√|√| [读](https://github.com/alibaba/DataX/blob/master/drdsreader/doc/drdsreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/drdswriter/doc/drdswriter.md) | -| | Kingbase |√|√| [读](https://github.com/alibaba/DataX/blob/master/drdsreader/doc/drdsreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/drdswriter/doc/drdswriter.md) | -| | 通用RDBMS(支持所有关系型数据库) |√|√| [读](https://github.com/alibaba/DataX/blob/master/rdbmsreader/doc/rdbmsreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/rdbmswriter/doc/rdbmswriter.md) | -| 阿里云数仓数据存储 | ODPS |√|√| 
[读](https://github.com/alibaba/DataX/blob/master/odpsreader/doc/odpsreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/odpswriter/doc/odpswriter.md) | -| | ADS | |√| [写](https://github.com/alibaba/DataX/blob/master/adswriter/doc/adswriter.md) | -| | OSS |√|√| [读](https://github.com/alibaba/DataX/blob/master/ossreader/doc/ossreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/osswriter/doc/osswriter.md) | -| | OCS | |√| [写](https://github.com/alibaba/DataX/blob/master/ocswriter/doc/ocswriter.md) | -| | Hologres | |√| [写](https://github.com/alibaba/DataX/blob/master/hologresjdbcwriter/doc/hologresjdbcwriter.md) | -| | AnalyticDB For PostgreSQL | |√| 写 | -| 阿里云中间件 | datahub |√|√| 读 、写 | -| | SLS |√|√| 读 、写 | -| 阿里云图数据库 | GDB |√|√| [读](https://github.com/alibaba/DataX/blob/master/gdbreader/doc/gdbreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/gdbwriter/doc/gdbwriter.md) | -| NoSQL数据存储 | OTS |√|√| [读](https://github.com/alibaba/DataX/blob/master/otsreader/doc/otsreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/otswriter/doc/otswriter.md) | -| | Hbase0.94 |√|√| [读](https://github.com/alibaba/DataX/blob/master/hbase094xreader/doc/hbase094xreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/hbase094xwriter/doc/hbase094xwriter.md) | -| | Hbase1.1 |√|√| [读](https://github.com/alibaba/DataX/blob/master/hbase11xreader/doc/hbase11xreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/hbase11xwriter/doc/hbase11xwriter.md) | -| | Phoenix4.x |√|√| [读](https://github.com/alibaba/DataX/blob/master/hbase11xsqlreader/doc/hbase11xsqlreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/hbase11xsqlwriter/doc/hbase11xsqlwriter.md) | -| | Phoenix5.x |√|√| [读](https://github.com/alibaba/DataX/blob/master/hbase20xsqlreader/doc/hbase20xsqlreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/hbase20xsqlwriter/doc/hbase20xsqlwriter.md) | -| | MongoDB |√|√| 
[读](https://github.com/alibaba/DataX/blob/master/mongodbreader/doc/mongodbreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/mongodbwriter/doc/mongodbwriter.md) | -| | Cassandra |√|√| [读](https://github.com/alibaba/DataX/blob/master/cassandrareader/doc/cassandrareader.md) 、[写](https://github.com/alibaba/DataX/blob/master/cassandrawriter/doc/cassandrawriter.md) | -| 数仓数据存储 | StarRocks |√|√| 读 、[写](https://github.com/alibaba/DataX/blob/master/starrockswriter/doc/starrockswriter.md) | -| | ApacheDoris | |√| [写](https://github.com/alibaba/DataX/blob/master/doriswriter/doc/doriswriter.md) | -| | ClickHouse | |√| 写| -| | Hive |√|√| [读](https://github.com/alibaba/DataX/blob/master/hdfsreader/doc/hdfsreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/hdfswriter/doc/hdfswriter.md) | -| | kudu | |√| [写](https://github.com/alibaba/DataX/blob/master/hdfswriter/doc/hdfswriter.md) | -| 无结构化数据存储 | TxtFile |√|√| [读](https://github.com/alibaba/DataX/blob/master/txtfilereader/doc/txtfilereader.md) 、[写](https://github.com/alibaba/DataX/blob/master/txtfilewriter/doc/txtfilewriter.md) | -| | FTP |√|√| [读](https://github.com/alibaba/DataX/blob/master/ftpreader/doc/ftpreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/ftpwriter/doc/ftpwriter.md) | -| | HDFS |√|√| [读](https://github.com/alibaba/DataX/blob/master/hdfsreader/doc/hdfsreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/hdfswriter/doc/hdfswriter.md) | -| | Elasticsearch | |√| [写](https://github.com/alibaba/DataX/blob/master/elasticsearchwriter/doc/elasticsearchwriter.md) | -| 时间序列数据库 | OpenTSDB |√| | [读](https://github.com/alibaba/DataX/blob/master/opentsdbreader/doc/opentsdbreader.md) | -| | TSDB |√|√| [读](https://github.com/alibaba/DataX/blob/master/tsdbreader/doc/tsdbreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/tsdbwriter/doc/tsdbhttpwriter.md) | -| | TDengine |√|√| [读](https://github.com/alibaba/DataX/blob/master/tdenginereader/doc/tdenginereader-CN.md) 
、[写](https://github.com/alibaba/DataX/blob/master/tdenginewriter/doc/tdenginewriter-CN.md) | +| 类型 | 数据源 | Reader(读) | Writer(写) | 文档 | +|--------------|---------------------------|:---------:|:---------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| +| RDBMS 关系型数据库 | MySQL | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/mysqlreader/doc/mysqlreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/mysqlwriter/doc/mysqlwriter.md) | +| | Oracle | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/oraclereader/doc/oraclereader.md) 、[写](https://github.com/alibaba/DataX/blob/master/oraclewriter/doc/oraclewriter.md) | +| | OceanBase | √ | √ | [读](https://open.oceanbase.com/docs/community/oceanbase-database/V3.1.0/use-datax-to-full-migration-data-to-oceanbase) 、[写](https://open.oceanbase.com/docs/community/oceanbase-database/V3.1.0/use-datax-to-full-migration-data-to-oceanbase) | +| | SQLServer | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/sqlserverreader/doc/sqlserverreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/sqlserverwriter/doc/sqlserverwriter.md) | +| | PostgreSQL | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/postgresqlreader/doc/postgresqlreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/postgresqlwriter/doc/postgresqlwriter.md) | +| | DRDS | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/drdsreader/doc/drdsreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/drdswriter/doc/drdswriter.md) | +| | Kingbase | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/drdsreader/doc/drdsreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/drdswriter/doc/drdswriter.md) | +| | 通用RDBMS(支持所有关系型数据库) | √ | √ | 
[读](https://github.com/alibaba/DataX/blob/master/rdbmsreader/doc/rdbmsreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/rdbmswriter/doc/rdbmswriter.md) | +| 阿里云数仓数据存储 | ODPS | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/odpsreader/doc/odpsreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/odpswriter/doc/odpswriter.md) | +| | ADB | | √ | [写](https://github.com/alibaba/DataX/blob/master/adbmysqlwriter/doc/adbmysqlwriter.md) | +| | ADS | | √ | [写](https://github.com/alibaba/DataX/blob/master/adswriter/doc/adswriter.md) | +| | OSS | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/ossreader/doc/ossreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/osswriter/doc/osswriter.md) | +| | OCS | | √ | [写](https://github.com/alibaba/DataX/blob/master/ocswriter/doc/ocswriter.md) | +| | Hologres | | √ | [写](https://github.com/alibaba/DataX/blob/master/hologresjdbcwriter/doc/hologresjdbcwriter.md) | +| | AnalyticDB For PostgreSQL | | √ | 写 | +| 阿里云中间件 | datahub | √ | √ | 读 、写 | +| | SLS | √ | √ | 读 、写 | +| 阿里云图数据库 | GDB | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/gdbreader/doc/gdbreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/gdbwriter/doc/gdbwriter.md) | +| NoSQL数据存储 | OTS | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/otsreader/doc/otsreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/otswriter/doc/otswriter.md) | +| | Hbase0.94 | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/hbase094xreader/doc/hbase094xreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/hbase094xwriter/doc/hbase094xwriter.md) | +| | Hbase1.1 | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/hbase11xreader/doc/hbase11xreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/hbase11xwriter/doc/hbase11xwriter.md) | +| | Phoenix4.x | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/hbase11xsqlreader/doc/hbase11xsqlreader.md) 
、[写](https://github.com/alibaba/DataX/blob/master/hbase11xsqlwriter/doc/hbase11xsqlwriter.md) | +| | Phoenix5.x | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/hbase20xsqlreader/doc/hbase20xsqlreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/hbase20xsqlwriter/doc/hbase20xsqlwriter.md) | +| | MongoDB | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/mongodbreader/doc/mongodbreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/mongodbwriter/doc/mongodbwriter.md) | +| | Cassandra | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/cassandrareader/doc/cassandrareader.md) 、[写](https://github.com/alibaba/DataX/blob/master/cassandrawriter/doc/cassandrawriter.md) | +| 数仓数据存储 | StarRocks | √ | √ | 读 、[写](https://github.com/alibaba/DataX/blob/master/starrockswriter/doc/starrockswriter.md) | +| | ApacheDoris | | √ | [写](https://github.com/alibaba/DataX/blob/master/doriswriter/doc/doriswriter.md) | +| | ClickHouse | | √ | 写 | +| | Databend | | √ | [写](https://github.com/alibaba/DataX/blob/master/databendwriter/doc/databendwriter.md) | +| | Hive | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/hdfsreader/doc/hdfsreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/hdfswriter/doc/hdfswriter.md) | +| | kudu | | √ | [写](https://github.com/alibaba/DataX/blob/master/hdfswriter/doc/hdfswriter.md) | +| | selectdb | | √ | [写](https://github.com/alibaba/DataX/blob/master/selectdbwriter/doc/selectdbwriter.md) | +| 无结构化数据存储 | TxtFile | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/txtfilereader/doc/txtfilereader.md) 、[写](https://github.com/alibaba/DataX/blob/master/txtfilewriter/doc/txtfilewriter.md) | +| | FTP | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/ftpreader/doc/ftpreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/ftpwriter/doc/ftpwriter.md) | +| | HDFS | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/hdfsreader/doc/hdfsreader.md) 
、[写](https://github.com/alibaba/DataX/blob/master/hdfswriter/doc/hdfswriter.md) | +| | Elasticsearch | | √ | [写](https://github.com/alibaba/DataX/blob/master/elasticsearchwriter/doc/elasticsearchwriter.md) | +| 时间序列数据库 | OpenTSDB | √ | | [读](https://github.com/alibaba/DataX/blob/master/opentsdbreader/doc/opentsdbreader.md) | +| | TSDB | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/tsdbreader/doc/tsdbreader.md) 、[写](https://github.com/alibaba/DataX/blob/master/tsdbwriter/doc/tsdbhttpwriter.md) | +| | TDengine | √ | √ | [读](https://github.com/alibaba/DataX/blob/master/tdenginereader/doc/tdenginereader-CN.md) 、[写](https://github.com/alibaba/DataX/blob/master/tdenginewriter/doc/tdenginewriter-CN.md) | # 阿里云DataWorks数据集成 @@ -105,6 +109,12 @@ DataX目前已经有了比较全面的插件体系,主流的RDBMS数据库、N DataX 后续计划月度迭代更新,也欢迎感兴趣的同学提交 Pull requests,月度更新内容会介绍介绍如下。 +- [datax_v202303](https://github.com/alibaba/DataX/releases/tag/datax_v202303) + - 精简代码 + - 新增插件(adbmysqlwriter、databendwriter、selectdbwriter) + - 优化插件、修复问题(sqlserver、hdfs、cassandra、kudu、oss) + - fastjson 升级到 fastjson2 + - [datax_v202210](https://github.com/alibaba/DataX/releases/tag/datax_v202210) - 涉及通道能力更新(OceanBase、Tdengine、Doris等) diff --git a/adbmysqlwriter/doc/adbmysqlwriter.md b/adbmysqlwriter/doc/adbmysqlwriter.md new file mode 100644 index 00000000..27ac6b10 --- /dev/null +++ b/adbmysqlwriter/doc/adbmysqlwriter.md @@ -0,0 +1,338 @@ +# DataX AdbMysqlWriter + + +--- + + +## 1 快速介绍 + +AdbMysqlWriter 插件实现了写入数据到 ADB MySQL 目的表的功能。在底层实现上, AdbMysqlWriter 通过 JDBC 连接远程 ADB MySQL 数据库,并执行相应的 `insert into ...` 或者 ( `replace into ...` ) 的 SQL 语句将数据写入 ADB MySQL,内部会分批次提交入库。 + +AdbMysqlWriter 面向ETL开发工程师,他们使用 AdbMysqlWriter 从数仓导入数据到 ADB MySQL。同时 AdbMysqlWriter 亦可以作为数据迁移工具为DBA等用户提供服务。 + + +## 2 实现原理 + +AdbMysqlWriter 通过 DataX 框架获取 Reader 生成的协议数据,AdbMysqlWriter 通过 JDBC 连接远程 ADB MySQL 数据库,并执行相应的 `insert into ...` 或者 ( `replace into ...` ) 的 SQL 语句将数据写入 ADB MySQL。 + + +* `insert into...`(遇到主键重复时会自动忽略当前写入数据,不做更新,作用等同于`insert ignore into`) + 
+##### 或者
+
+* `replace into...`(没有遇到主键/唯一性索引冲突时,与 insert into 行为一致;冲突时会用新行替换原有行的所有字段)的语句写入数据到 ADB MySQL。出于性能考虑,采用了 `PreparedStatement + Batch`,并且设置了 `rewriteBatchedStatements=true`,将数据缓冲到线程上下文 Buffer 中,当 Buffer 累计到预定阈值时,才发起写入请求。
+
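上文描述的「先缓冲、攒满一批再统一提交」的策略,可以用下面这个独立的 Java 草图示意。注意:类名、方法名均为假设,并非插件实际源码;真实场景中 flush 处执行的是 JDBC 批量 insert/replace。

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// 示意:按阈值缓冲记录,攒满一批再统一提交(类似 AdbMysqlWriter 的 PreparedStatement + Batch 思路)
public class BatchBuffer<T> {
    private final int batchSize;
    private final List<T> buffer = new ArrayList<>();
    private final Consumer<List<T>> flusher; // 实际场景中这里执行 JDBC 批量写入

    public BatchBuffer(int batchSize, Consumer<List<T>> flusher) {
        this.batchSize = batchSize;
        this.flusher = flusher;
    }

    public void add(T record) {
        buffer.add(record);
        if (buffer.size() >= batchSize) {
            flush(); // 累计到预定阈值,才发起一次写入请求
        }
    }

    public void flush() {
        if (!buffer.isEmpty()) {
            flusher.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }

    public static void main(String[] args) {
        List<Integer> flushSizes = new ArrayList<>();
        BatchBuffer<String> buf = new BatchBuffer<>(3, batch -> flushSizes.add(batch.size()));
        for (int i = 0; i < 7; i++) {
            buf.add("record-" + i);
        }
        buf.flush(); // 任务结束时冲刷残留记录
        System.out.println(flushSizes); // 7 条记录、批大小 3 → 批次为 [3, 3, 1]
    }
}
```

真实任务中,这个阈值由下文的 `batchSize` 参数控制。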
+ + 注意:整个任务至少需要具备 `insert/replace into...` 的权限,是否需要其他权限,取决于你任务配置中在 preSql 和 postSql 中指定的语句。 + + +## 3 功能说明 + +### 3.1 配置样例 + +* 这里使用一份从内存产生到 ADB MySQL 导入的数据。 + +```json +{ + "job": { + "setting": { + "speed": { + "channel": 1 + } + }, + "content": [ + { + "reader": { + "name": "streamreader", + "parameter": { + "column" : [ + { + "value": "DataX", + "type": "string" + }, + { + "value": 19880808, + "type": "long" + }, + { + "value": "1988-08-08 08:08:08", + "type": "date" + }, + { + "value": true, + "type": "bool" + }, + { + "value": "test", + "type": "bytes" + } + ], + "sliceRecordCount": 1000 + } + }, + "writer": { + "name": "adbmysqlwriter", + "parameter": { + "writeMode": "replace", + "username": "root", + "password": "root", + "column": [ + "*" + ], + "preSql": [ + "truncate table @table" + ], + "connection": [ + { + "jdbcUrl": "jdbc:mysql://ip:port/database?useUnicode=true", + "table": [ + "test" + ] + } + ] + } + } + } + ] + } +} + +``` + + +### 3.2 参数说明 + +* **jdbcUrl** + + * 描述:目的数据库的 JDBC 连接信息。作业运行时,DataX 会在你提供的 jdbcUrl 后面追加如下属性:yearIsDateType=false&zeroDateTimeBehavior=convertToNull&rewriteBatchedStatements=true + + 注意:1、在一个数据库上只能配置一个 jdbcUrl + 2、一个 AdbMySQL 写入任务仅能配置一个 jdbcUrl + 3、jdbcUrl按照MySQL官方规范,并可以填写连接附加控制信息,比如想指定连接编码为 gbk ,则在 jdbcUrl 后面追加属性 useUnicode=true&characterEncoding=gbk。具体请参看 Mysql官方文档或者咨询对应 DBA。 + + * 必选:是
+ + * 默认值:无
+ +* **username** + + * 描述:目的数据库的用户名
+ + * 必选:是
+ + * 默认值:无
+ +* **password** + + * 描述:目的数据库的密码
+ + * 必选:是
+ + * 默认值:无
+ +* **table** + + * 描述:目的表的表名称。只能配置一个 AdbMySQL 的表名称。 + + 注意:table 和 jdbcUrl 必须包含在 connection 配置单元中 + + * 必选:是
+ + * 默认值:无
+ +* **column** + + * 描述:目的表需要写入数据的字段,字段之间用英文逗号分隔。例如: "column": ["id", "name", "age"]。如果要依次写入全部列,使用`*`表示, 例如: `"column": ["*"]`。 + + **column配置项必须指定,不能留空!** + + 注意:1、我们强烈不推荐你这样配置,因为当你目的表字段个数、类型等有改动时,你的任务可能运行不正确或者失败 + 2、 column 不能配置任何常量值 + + * 必选:是
+
+	* 默认值:无
+
+* **session**
+
+	* 描述:DataX 在获取 ADB MySQL 连接时,执行 session 指定的 SQL 语句,修改当前 connection 的 session 属性
+
+	* 必选:否
+
+	* 默认值:空
+
+* **preSql**
+
+	* 描述:写入数据到目的表前,会先执行这里的标准语句。如果 SQL 中有你需要操作到的表名称,请使用 `@table` 表示,这样在实际执行 SQL 语句时,会对变量按照实际表名称进行替换。比如希望导入数据前,先对表中数据进行删除操作,那么你可以这样配置:`"preSql":["truncate table @table"]`,效果是:在执行到每个表写入数据前,会先执行对应的 `truncate table 对应表名称`
+ + * 必选:否
+ + * 默认值:无
+ +* **postSql** + + * 描述:写入数据到目的表后,会执行这里的标准语句。(原理同 preSql )
+ + * 必选:否
+ + * 默认值:无
+ +* **writeMode** + + * 描述:控制写入数据到目标表采用 `insert into` 或者 `replace into` 或者 `ON DUPLICATE KEY UPDATE` 语句
+
+	* 必选:否
+ + * 所有选项:insert/replace/update
+ + * 默认值:replace
+ +* **batchSize** + + * 描述:一次性批量提交的记录数大小,该值可以极大减少DataX与 Adb MySQL 的网络交互次数,并提升整体吞吐量。但是该值设置过大可能会造成DataX运行进程OOM情况。
+ + * 必选:否
+ + * 默认值:2048
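上面 `preSql`/`postSql` 中 `@table` 变量替换的效果,可以用如下 Java 小例子示意。注意:`PreSqlRenderer` 及其方法名为虚构的假设性示例,并非插件实际实现:

```java
// 示意:把 preSql/postSql 中的 @table 占位符替换为实际表名(假设性示例,非插件源码)
public class PreSqlRenderer {
    public static String render(String sqlTemplate, String tableName) {
        // DataX 会在对每张表执行 preSql/postSql 前做类似的变量替换
        return sqlTemplate.replace("@table", tableName);
    }

    public static void main(String[] args) {
        String preSql = "truncate table @table";
        System.out.println(render(preSql, "test")); // truncate table test
    }
}
```

即:配置 `"preSql":["truncate table @table"]`、目的表名为 `test` 时,实际执行的语句为 `truncate table test`。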
+ + +### 3.3 类型转换 + +目前 AdbMysqlWriter 支持大部分 MySQL 类型,但也存在部分个别类型没有支持的情况,请注意检查你的类型。 + +下面列出 AdbMysqlWriter 针对 MySQL 类型转换列表: + +| DataX 内部类型 | AdbMysql 数据类型 | +|---------------|---------------------------------| +| Long | tinyint, smallint, int, bigint | +| Double | float, double, decimal | +| String | varchar | +| Date | date, time, datetime, timestamp | +| Boolean | boolean | +| Bytes | binary | + +## 4 性能报告 + +### 4.1 环境准备 + +#### 4.1.1 数据特征 +TPC-H 数据集 lineitem 表,共 17 个字段, 随机生成总记录行数 59986052。未压缩总数据量:7.3GiB + +建表语句: + + CREATE TABLE `datax_adbmysqlwriter_perf_lineitem` ( + `l_orderkey` bigint NOT NULL COMMENT '', + `l_partkey` int NOT NULL COMMENT '', + `l_suppkey` int NOT NULL COMMENT '', + `l_linenumber` int NOT NULL COMMENT '', + `l_quantity` decimal(15,2) NOT NULL COMMENT '', + `l_extendedprice` decimal(15,2) NOT NULL COMMENT '', + `l_discount` decimal(15,2) NOT NULL COMMENT '', + `l_tax` decimal(15,2) NOT NULL COMMENT '', + `l_returnflag` varchar(1024) NOT NULL COMMENT '', + `l_linestatus` varchar(1024) NOT NULL COMMENT '', + `l_shipdate` date NOT NULL COMMENT '', + `l_commitdate` date NOT NULL COMMENT '', + `l_receiptdate` date NOT NULL COMMENT '', + `l_shipinstruct` varchar(1024) NOT NULL COMMENT '', + `l_shipmode` varchar(1024) NOT NULL COMMENT '', + `l_comment` varchar(1024) NOT NULL COMMENT '', + `dummy` varchar(1024), + PRIMARY KEY (`l_orderkey`, `l_linenumber`) + ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='datax perf test'; + +单行记录类似于: + + l_orderkey: 2122789 + l_partkey: 1233571 + l_suppkey: 8608 + l_linenumber: 1 + l_quantity: 35.00 + l_extendedprice: 52657.85 + l_discount: 0.02 + l_tax: 0.07 + l_returnflag: N + l_linestatus: O + l_shipdate: 1996-11-03 + l_commitdate: 1996-12-07 + l_receiptdate: 1996-11-16 + l_shipinstruct: COLLECT COD + l_shipmode: FOB + l_comment: ld, regular theodolites. 
+ dummy: + +#### 4.1.2 机器参数 + +* DataX ECS: 24Core48GB + +* Adb MySQL 数据库 + * 计算资源:16Core64GB(集群版) + * 弹性IO资源:3 + +#### 4.1.3 DataX jvm 参数 + + -Xms1G -Xmx10G -XX:+HeapDumpOnOutOfMemoryError + +### 4.2 测试报告 + +| 通道数 | 批量提交行数 | DataX速度(Rec/s) | DataX流量(MB/s) | 导入用时(s) | +|-----|-------|------------------|---------------|---------| +| 1 | 512 | 23071 | 2.34 | 2627 | +| 1 | 1024 | 26080 | 2.65 | 2346 | +| 1 | 2048 | 28162 | 2.86 | 2153 | +| 1 | 4096 | 28978 | 2.94 | 2119 | +| 4 | 512 | 56590 | 5.74 | 1105 | +| 4 | 1024 | 81062 | 8.22 | 763 | +| 4 | 2048 | 107117 | 10.87 | 605 | +| 4 | 4096 | 113181 | 11.48 | 579 | +| 8 | 512 | 81062 | 8.22 | 786 | +| 8 | 1024 | 127629 | 12.95 | 519 | +| 8 | 2048 | 187456 | 19.01 | 369 | +| 8 | 4096 | 206848 | 20.98 | 341 | +| 16 | 512 | 130404 | 13.23 | 513 | +| 16 | 1024 | 214235 | 21.73 | 335 | +| 16 | 2048 | 299930 | 30.42 | 253 | +| 16 | 4096 | 333255 | 33.80 | 227 | +| 32 | 512 | 206848 | 20.98 | 347 | +| 32 | 1024 | 315716 | 32.02 | 241 | +| 32 | 2048 | 399907 | 40.56 | 199 | +| 32 | 4096 | 461431 | 46.80 | 184 | +| 64 | 512 | 333255 | 33.80 | 231 | +| 64 | 1024 | 399907 | 40.56 | 204 | +| 64 | 2048 | 428471 | 43.46 | 199 | +| 64 | 4096 | 461431 | 46.80 | 187 | +| 128 | 512 | 333255 | 33.80 | 235 | +| 128 | 1024 | 399907 | 40.56 | 203 | +| 128 | 2048 | 425432 | 43.15 | 197 | +| 128 | 4096 | 387006 | 39.26 | 211 | + +说明: + +1. datax 使用 txtfilereader 读取本地文件,避免源端存在性能瓶颈。 + +#### 性能测试小结 +1. channel通道个数和batchSize对性能影响比较大 +2. 
通常不建议写入数据库时,通道个数 > 32 + +## 5 约束限制 + +## FAQ + +*** + +**Q: AdbMysqlWriter 执行 postSql 语句报错,那么数据导入到目标数据库了吗?** + +A: DataX 导入过程存在三块逻辑,pre 操作、导入操作、post 操作,其中任意一环报错,DataX 作业报错。由于 DataX 不能保证在同一个事务完成上述几个操作,因此有可能数据已经落入到目标端。 + +*** + +**Q: 按照上述说法,那么有部分脏数据导入数据库,如果影响到线上数据库怎么办?** + +A: 目前有两种解法,第一种配置 pre 语句,该 sql 可以清理当天导入数据, DataX 每次导入时候可以把上次清理干净并导入完整数据。第二种,向临时表导入数据,完成后再 rename 到线上表。 + +*** + +**Q: 上面第二种方法可以避免对线上数据造成影响,那我具体怎样操作?** + +A: 可以配置临时表导入 diff --git a/adbmysqlwriter/pom.xml b/adbmysqlwriter/pom.xml new file mode 100755 index 00000000..6ffcab85 --- /dev/null +++ b/adbmysqlwriter/pom.xml @@ -0,0 +1,79 @@ + + 4.0.0 + + com.alibaba.datax + datax-all + 0.0.1-SNAPSHOT + + adbmysqlwriter + adbmysqlwriter + jar + + + + com.alibaba.datax + datax-common + ${datax-project-version} + + + slf4j-log4j12 + org.slf4j + + + + + org.slf4j + slf4j-api + + + ch.qos.logback + logback-classic + + + + com.alibaba.datax + plugin-rdbms-util + ${datax-project-version} + + + + mysql + mysql-connector-java + 5.1.40 + + + + + + + + maven-compiler-plugin + + ${jdk-version} + ${jdk-version} + ${project-sourceEncoding} + + + + + maven-assembly-plugin + + + src/main/assembly/package.xml + + datax + + + + dwzip + package + + single + + + + + + + diff --git a/adbmysqlwriter/src/main/assembly/package.xml b/adbmysqlwriter/src/main/assembly/package.xml new file mode 100755 index 00000000..7192e531 --- /dev/null +++ b/adbmysqlwriter/src/main/assembly/package.xml @@ -0,0 +1,35 @@ + + + + dir + + false + + + src/main/resources + + plugin.json + plugin_job_template.json + + plugin/writer/adbmysqlwriter + + + target/ + + adbmysqlwriter-0.0.1-SNAPSHOT.jar + + plugin/writer/adbmysqlwriter + + + + + + false + plugin/writer/adbmysqlwriter/libs + runtime + + + diff --git a/adbmysqlwriter/src/main/java/com/alibaba/datax/plugin/writer/adbmysqlwriter/AdbMysqlWriter.java b/adbmysqlwriter/src/main/java/com/alibaba/datax/plugin/writer/adbmysqlwriter/AdbMysqlWriter.java new file mode 100755 index 00000000..762c4934 --- 
/dev/null +++ b/adbmysqlwriter/src/main/java/com/alibaba/datax/plugin/writer/adbmysqlwriter/AdbMysqlWriter.java @@ -0,0 +1,138 @@ +package com.alibaba.datax.plugin.writer.adbmysqlwriter; + +import com.alibaba.datax.common.element.Record; +import com.alibaba.datax.common.plugin.RecordReceiver; +import com.alibaba.datax.common.spi.Writer; +import com.alibaba.datax.common.util.Configuration; +import com.alibaba.datax.plugin.rdbms.util.DataBaseType; +import com.alibaba.datax.plugin.rdbms.writer.CommonRdbmsWriter; +import com.alibaba.datax.plugin.rdbms.writer.Key; +import org.apache.commons.lang3.StringUtils; + +import java.sql.Connection; +import java.sql.SQLException; +import java.util.List; + +public class AdbMysqlWriter extends Writer { + private static final DataBaseType DATABASE_TYPE = DataBaseType.ADB; + + public static class Job extends Writer.Job { + private Configuration originalConfig = null; + private CommonRdbmsWriter.Job commonRdbmsWriterJob; + + @Override + public void preCheck(){ + this.init(); + this.commonRdbmsWriterJob.writerPreCheck(this.originalConfig, DATABASE_TYPE); + } + + @Override + public void init() { + this.originalConfig = super.getPluginJobConf(); + this.commonRdbmsWriterJob = new CommonRdbmsWriter.Job(DATABASE_TYPE); + this.commonRdbmsWriterJob.init(this.originalConfig); + } + + // 一般来说,是需要推迟到 task 中进行pre 的执行(单表情况例外) + @Override + public void prepare() { + //实跑先不支持 权限 检验 + //this.commonRdbmsWriterJob.privilegeValid(this.originalConfig, DATABASE_TYPE); + this.commonRdbmsWriterJob.prepare(this.originalConfig); + } + + @Override + public List split(int mandatoryNumber) { + return this.commonRdbmsWriterJob.split(this.originalConfig, mandatoryNumber); + } + + // 一般来说,是需要推迟到 task 中进行post 的执行(单表情况例外) + @Override + public void post() { + this.commonRdbmsWriterJob.post(this.originalConfig); + } + + @Override + public void destroy() { + this.commonRdbmsWriterJob.destroy(this.originalConfig); + } + + } + + public static class Task extends 
Writer.Task { + + private Configuration writerSliceConfig; + private CommonRdbmsWriter.Task commonRdbmsWriterTask; + + public static class DelegateClass extends CommonRdbmsWriter.Task { + private long writeTime = 0L; + private long writeCount = 0L; + private long lastLogTime = 0; + + public DelegateClass(DataBaseType dataBaseType) { + super(dataBaseType); + } + + @Override + protected void doBatchInsert(Connection connection, List buffer) + throws SQLException { + long startTime = System.currentTimeMillis(); + + super.doBatchInsert(connection, buffer); + + writeCount = writeCount + buffer.size(); + writeTime = writeTime + (System.currentTimeMillis() - startTime); + + // log write metrics every 10 seconds + if (System.currentTimeMillis() - lastLogTime > 10000) { + lastLogTime = System.currentTimeMillis(); + logTotalMetrics(); + } + } + + public void logTotalMetrics() { + LOG.info(Thread.currentThread().getName() + ", AdbMySQL writer take " + writeTime + " ms, write " + writeCount + " records."); + } + } + + @Override + public void init() { + this.writerSliceConfig = super.getPluginJobConf(); + + if (StringUtils.isBlank(this.writerSliceConfig.getString(Key.WRITE_MODE))) { + this.writerSliceConfig.set(Key.WRITE_MODE, "REPLACE"); + } + + this.commonRdbmsWriterTask = new DelegateClass(DATABASE_TYPE); + this.commonRdbmsWriterTask.init(this.writerSliceConfig); + } + + @Override + public void prepare() { + this.commonRdbmsWriterTask.prepare(this.writerSliceConfig); + } + + //TODO 改用连接池,确保每次获取的连接都是可用的(注意:连接可能需要每次都初始化其 session) + public void startWrite(RecordReceiver recordReceiver) { + this.commonRdbmsWriterTask.startWrite(recordReceiver, this.writerSliceConfig, + super.getTaskPluginCollector()); + } + + @Override + public void post() { + this.commonRdbmsWriterTask.post(this.writerSliceConfig); + } + + @Override + public void destroy() { + this.commonRdbmsWriterTask.destroy(this.writerSliceConfig); + } + + @Override + public boolean supportFailOver(){ + String writeMode = 
writerSliceConfig.getString(Key.WRITE_MODE); + return "replace".equalsIgnoreCase(writeMode); + } + + } +} diff --git a/adbmysqlwriter/src/main/resources/plugin.json b/adbmysqlwriter/src/main/resources/plugin.json new file mode 100755 index 00000000..58c69533 --- /dev/null +++ b/adbmysqlwriter/src/main/resources/plugin.json @@ -0,0 +1,6 @@ +{ + "name": "adbmysqlwriter", + "class": "com.alibaba.datax.plugin.writer.adbmysqlwriter.AdbMysqlWriter", + "description": "useScene: prod. mechanism: Jdbc connection using the database, execute insert sql. warn: The more you know about the database, the less problems you encounter.", + "developer": "alibaba" +} \ No newline at end of file diff --git a/adbmysqlwriter/src/main/resources/plugin_job_template.json b/adbmysqlwriter/src/main/resources/plugin_job_template.json new file mode 100644 index 00000000..9537ee5a --- /dev/null +++ b/adbmysqlwriter/src/main/resources/plugin_job_template.json @@ -0,0 +1,20 @@ +{ + "name": "adbmysqlwriter", + "parameter": { + "username": "username", + "password": "password", + "column": ["col1", "col2", "col3"], + "connection": [ + { + "jdbcUrl": "jdbc:mysql://:[/]", + "table": ["table1", "table2"] + } + ], + "preSql": [], + "postSql": [], + "batchSize": 65536, + "batchByteSize": 134217728, + "dryRun": false, + "writeMode": "insert" + } +} \ No newline at end of file diff --git a/adswriter/doc/adswriter.md b/adswriter/doc/adswriter.md index 4a0fd961..c02f8018 100644 --- a/adswriter/doc/adswriter.md +++ b/adswriter/doc/adswriter.md @@ -110,7 +110,6 @@ DataX 将数据直连ADS接口,利用ADS暴露的INSERT接口直写到ADS。 "account": "xxx@aliyun.com", "odpsServer": "xxx", "tunnelServer": "xxx", - "accountType": "aliyun", "project": "transfer_project" }, "writeMode": "load", diff --git a/adswriter/src/main/java/com/alibaba/datax/plugin/writer/adswriter/insert/AdsClientProxy.java b/adswriter/src/main/java/com/alibaba/datax/plugin/writer/adswriter/insert/AdsClientProxy.java index 8fdc70d6..326b464d 100644 --- 
--- a/adswriter/src/main/java/com/alibaba/datax/plugin/writer/adswriter/insert/AdsClientProxy.java
+++ b/adswriter/src/main/java/com/alibaba/datax/plugin/writer/adswriter/insert/AdsClientProxy.java
@@ -18,7 +18,7 @@ import com.alibaba.datax.plugin.writer.adswriter.AdsWriterErrorCode;
 import com.alibaba.datax.plugin.writer.adswriter.ads.TableInfo;
 import com.alibaba.datax.plugin.writer.adswriter.util.Constant;
 import com.alibaba.datax.plugin.writer.adswriter.util.Key;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.commons.lang3.tuple.Pair;
 import org.slf4j.Logger;
diff --git a/adswriter/src/main/java/com/alibaba/datax/plugin/writer/adswriter/load/TransferProjectConf.java b/adswriter/src/main/java/com/alibaba/datax/plugin/writer/adswriter/load/TransferProjectConf.java
index bff4b7b9..3d28a833 100644
--- a/adswriter/src/main/java/com/alibaba/datax/plugin/writer/adswriter/load/TransferProjectConf.java
+++ b/adswriter/src/main/java/com/alibaba/datax/plugin/writer/adswriter/load/TransferProjectConf.java
@@ -12,7 +12,6 @@ public class TransferProjectConf {
     public final static String KEY_ACCOUNT = "odps.account";
     public final static String KEY_ODPS_SERVER = "odps.odpsServer";
     public final static String KEY_ODPS_TUNNEL = "odps.tunnelServer";
-    public final static String KEY_ACCOUNT_TYPE = "odps.accountType";
     public final static String KEY_PROJECT = "odps.project";
 
     private String accessId;
@@ -20,7 +19,6 @@ public class TransferProjectConf {
     private String account;
     private String odpsServer;
     private String odpsTunnel;
-    private String accountType;
     private String project;
 
     public static TransferProjectConf create(Configuration adsWriterConf) {
@@ -30,7 +28,6 @@ public class TransferProjectConf {
         res.account = adsWriterConf.getString(KEY_ACCOUNT);
         res.odpsServer = adsWriterConf.getString(KEY_ODPS_SERVER);
         res.odpsTunnel = adsWriterConf.getString(KEY_ODPS_TUNNEL);
-        res.accountType = adsWriterConf.getString(KEY_ACCOUNT_TYPE, "aliyun");
         res.project = adsWriterConf.getString(KEY_PROJECT);
         return res;
     }
@@ -55,9 +52,6 @@ public class TransferProjectConf {
         return odpsTunnel;
     }
 
-    public String getAccountType() {
-        return accountType;
-    }
 
     public String getProject() {
         return project;
diff --git a/cassandrareader/src/main/java/com/alibaba/datax/plugin/reader/cassandrareader/CassandraReaderHelper.java b/cassandrareader/src/main/java/com/alibaba/datax/plugin/reader/cassandrareader/CassandraReaderHelper.java
index 0a4e83fa..f5937c2f 100644
--- a/cassandrareader/src/main/java/com/alibaba/datax/plugin/reader/cassandrareader/CassandraReaderHelper.java
+++ b/cassandrareader/src/main/java/com/alibaba/datax/plugin/reader/cassandrareader/CassandraReaderHelper.java
@@ -23,7 +23,7 @@ import com.alibaba.datax.common.element.StringColumn;
 import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.common.plugin.TaskPluginCollector;
 import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 
 import com.datastax.driver.core.Cluster;
 import com.datastax.driver.core.CodecRegistry;
@@ -298,6 +298,7 @@ public class CassandraReaderHelper {
         record.addColumn(new LongColumn(rs.getInt(i)));
         break;
 
+      case COUNTER:
       case BIGINT:
         record.addColumn(new LongColumn(rs.getLong(i)));
         break;
@@ -558,26 +559,6 @@ public class CassandraReaderHelper {
                 String.format(
                     "配置信息有错误.列信息中需要包含'%s'字段 .",Key.COLUMN_NAME));
       }
-      if( name.startsWith(Key.WRITE_TIME) ) {
-        String colName = name.substring(Key.WRITE_TIME.length(),name.length() - 1 );
-        ColumnMetadata col = tableMetadata.getColumn(colName);
-        if( col == null ) {
-          throw DataXException
-              .asDataXException(
-                  CassandraReaderErrorCode.CONF_ERROR,
-                  String.format(
-                      "配置信息有错误.列'%s'不存在 .",colName));
-        }
-      } else {
-        ColumnMetadata col = tableMetadata.getColumn(name);
-        if( col == null ) {
-          throw DataXException
-              .asDataXException(
-                  CassandraReaderErrorCode.CONF_ERROR,
-                  String.format(
-                      "配置信息有错误.列'%s'不存在 .",name));
-        }
-      }
     }
   }
diff --git a/cassandrawriter/src/main/java/com/alibaba/datax/plugin/writer/cassandrawriter/CassandraWriterHelper.java b/cassandrawriter/src/main/java/com/alibaba/datax/plugin/writer/cassandrawriter/CassandraWriterHelper.java
index b68af281..5ac392b7 100644
--- a/cassandrawriter/src/main/java/com/alibaba/datax/plugin/writer/cassandrawriter/CassandraWriterHelper.java
+++ b/cassandrawriter/src/main/java/com/alibaba/datax/plugin/writer/cassandrawriter/CassandraWriterHelper.java
@@ -18,10 +18,10 @@ import java.util.UUID;
 
 import com.alibaba.datax.common.element.Column;
 import com.alibaba.datax.common.exception.DataXException;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONArray;
-import com.alibaba.fastjson.JSONException;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONArray;
+import com.alibaba.fastjson2.JSONException;
+import com.alibaba.fastjson2.JSONObject;
 
 import com.datastax.driver.core.BoundStatement;
 import com.datastax.driver.core.CodecRegistry;
@@ -204,7 +204,7 @@ public class CassandraWriterHelper {
 
       case MAP: {
         Map m = new HashMap();
-        for (JSONObject.Entry e : ((JSONObject)jsonObject).entrySet()) {
+        for (Map.Entry e : ((JSONObject)jsonObject).entrySet()) {
           Object k = parseFromString((String) e.getKey(), type.getTypeArguments().get(0));
           Object v = parseFromJson(e.getValue(), type.getTypeArguments().get(1));
           m.put(k,v);
@@ -233,7 +233,7 @@ public class CassandraWriterHelper {
       case UDT: {
         UDTValue t = ((UserType) type).newValue();
         UserType userType = t.getType();
-        for (JSONObject.Entry e : ((JSONObject)jsonObject).entrySet()) {
+        for (Map.Entry e : ((JSONObject)jsonObject).entrySet()) {
           DataType eleType = userType.getFieldType((String)e.getKey());
           t.set((String)e.getKey(), parseFromJson(e.getValue(), eleType), registry.codecFor(eleType).getJavaType());
         }
diff --git a/clickhousewriter/src/main/java/com/alibaba/datax/plugin/writer/clickhousewriter/ClickhouseWriter.java b/clickhousewriter/src/main/java/com/alibaba/datax/plugin/writer/clickhousewriter/ClickhouseWriter.java
index 31ffdfec..83c421ee 100644
--- a/clickhousewriter/src/main/java/com/alibaba/datax/plugin/writer/clickhousewriter/ClickhouseWriter.java
+++ b/clickhousewriter/src/main/java/com/alibaba/datax/plugin/writer/clickhousewriter/ClickhouseWriter.java
@@ -10,8 +10,8 @@ import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.plugin.rdbms.util.DBUtilErrorCode;
 import com.alibaba.datax.plugin.rdbms.util.DataBaseType;
 import com.alibaba.datax.plugin.rdbms.writer.CommonRdbmsWriter;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONArray;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONArray;
 
 import java.sql.Array;
 import java.sql.Connection;
diff --git a/common/pom.xml b/common/pom.xml
index eafdb5da..59d7073d 100755
--- a/common/pom.xml
+++ b/common/pom.xml
@@ -17,8 +17,8 @@
             <artifactId>commons-lang3</artifactId>
         </dependency>
         <dependency>
-            <groupId>com.alibaba</groupId>
-            <artifactId>fastjson</artifactId>
+            <groupId>com.alibaba.fastjson2</groupId>
+            <artifactId>fastjson2</artifactId>
         </dependency>
         <dependency>
             <artifactId>commons-io</artifactId>
diff --git a/common/src/main/java/com/alibaba/datax/common/element/Column.java b/common/src/main/java/com/alibaba/datax/common/element/Column.java
index 2e093a7a..13cfc7de 100755
--- a/common/src/main/java/com/alibaba/datax/common/element/Column.java
+++ b/common/src/main/java/com/alibaba/datax/common/element/Column.java
@@ -1,6 +1,6 @@
 package com.alibaba.datax.common.element;
 
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 
 import java.math.BigDecimal;
 import java.math.BigInteger;
diff --git a/common/src/main/java/com/alibaba/datax/common/statistics/PerfTrace.java b/common/src/main/java/com/alibaba/datax/common/statistics/PerfTrace.java
index ea9aa421..cf0457bc 100644
--- a/common/src/main/java/com/alibaba/datax/common/statistics/PerfTrace.java
+++ b/common/src/main/java/com/alibaba/datax/common/statistics/PerfTrace.java
@@ -31,7 +31,6 @@ public class PerfTrace {
     private int taskGroupId;
     private int channelNumber;
 
-    private int priority;
     private int batchSize = 500;
     private volatile boolean perfReportEnable = true;
@@ -54,12 +53,12 @@ public class PerfTrace {
      * @param taskGroupId
      * @return
      */
-    public static PerfTrace getInstance(boolean isJob, long jobId, int taskGroupId, int priority, boolean enable) {
+    public static PerfTrace getInstance(boolean isJob, long jobId, int taskGroupId, boolean enable) {
 
         if (instance == null) {
             synchronized (lock) {
                 if (instance == null) {
-                    instance = new PerfTrace(isJob, jobId, taskGroupId, priority, enable);
+                    instance = new PerfTrace(isJob, jobId, taskGroupId, enable);
                 }
             }
         }
@@ -76,22 +75,21 @@ public class PerfTrace {
             LOG.error("PerfTrace instance not be init! must have some error! ");
             synchronized (lock) {
                 if (instance == null) {
-                    instance = new PerfTrace(false, -1111, -1111, 0, false);
+                    instance = new PerfTrace(false, -1111, -1111, false);
                 }
             }
         }
         return instance;
     }
 
-    private PerfTrace(boolean isJob, long jobId, int taskGroupId, int priority, boolean enable) {
+    private PerfTrace(boolean isJob, long jobId, int taskGroupId, boolean enable) {
         try {
             this.perfTraceId = isJob ? "job_" + jobId : String.format("taskGroup_%s_%s", jobId, taskGroupId);
             this.enable = enable;
             this.isJob = isJob;
             this.taskGroupId = taskGroupId;
             this.instId = jobId;
-            this.priority = priority;
-            LOG.info(String.format("PerfTrace traceId=%s, isEnable=%s, priority=%s", this.perfTraceId, this.enable, this.priority));
+            LOG.info(String.format("PerfTrace traceId=%s, isEnable=%s", this.perfTraceId, this.enable));
 
         } catch (Exception e) {
             // do nothing
@@ -398,7 +396,6 @@ public class PerfTrace {
         jdo.setWindowEnd(this.windowEnd);
         jdo.setJobStartTime(jobStartTime);
         jdo.setJobRunTimeMs(System.currentTimeMillis() - jobStartTime.getTime());
-        jdo.setJobPriority(this.priority);
         jdo.setChannelNum(this.channelNumber);
         jdo.setCluster(this.cluster);
         jdo.setJobDomain(this.jobDomain);
@@ -609,7 +606,6 @@ public class PerfTrace {
         private Date jobStartTime;
         private Date jobEndTime;
         private Long jobRunTimeMs;
-        private Integer jobPriority;
         private Integer channelNum;
         private String cluster;
         private String jobDomain;
@@ -680,10 +676,6 @@ public class PerfTrace {
             return jobRunTimeMs;
         }
 
-        public Integer getJobPriority() {
-            return jobPriority;
-        }
-
         public Integer getChannelNum() {
             return channelNum;
         }
@@ -816,10 +808,6 @@ public class PerfTrace {
             this.jobRunTimeMs = jobRunTimeMs;
         }
 
-        public void setJobPriority(Integer jobPriority) {
-            this.jobPriority = jobPriority;
-        }
-
         public void setChannelNum(Integer channelNum) {
             this.channelNum = channelNum;
         }
diff --git a/common/src/main/java/com/alibaba/datax/common/util/Configuration.java b/common/src/main/java/com/alibaba/datax/common/util/Configuration.java
index cd88e84a..c1194532 100755
--- a/common/src/main/java/com/alibaba/datax/common/util/Configuration.java
+++ b/common/src/main/java/com/alibaba/datax/common/util/Configuration.java
@@ -3,8 +3,8 @@ package com.alibaba.datax.common.util;
 
 import com.alibaba.datax.common.exception.CommonErrorCode;
 import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.common.spi.ErrorCode;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.serializer.SerializerFeature;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONWriter;
 
 import org.apache.commons.io.IOUtils;
 import org.apache.commons.lang3.CharUtils;
 import org.apache.commons.lang3.StringUtils;
@@ -586,7 +586,7 @@ public class Configuration {
      */
     public String beautify() {
         return JSON.toJSONString(this.getInternal(),
-                SerializerFeature.PrettyFormat);
+                JSONWriter.Feature.PrettyFormat);
     }
 
     /**
diff --git a/common/src/main/java/com/alibaba/datax/common/util/IdAndKeyRollingUtil.java b/common/src/main/java/com/alibaba/datax/common/util/IdAndKeyRollingUtil.java
deleted file mode 100644
index 8bab301e..00000000
--- a/common/src/main/java/com/alibaba/datax/common/util/IdAndKeyRollingUtil.java
+++ /dev/null
@@ -1,62 +0,0 @@
-package com.alibaba.datax.common.util;
-
-import java.util.Map;
-
-import org.apache.commons.lang3.StringUtils;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import com.alibaba.datax.common.exception.DataXException;
-
-public class IdAndKeyRollingUtil {
-    private static Logger LOGGER = LoggerFactory.getLogger(IdAndKeyRollingUtil.class);
-    public static final String SKYNET_ACCESSID = "SKYNET_ACCESSID";
-    public static final String SKYNET_ACCESSKEY = "SKYNET_ACCESSKEY";
-
-    public final static String ACCESS_ID = "accessId";
-    public final static String ACCESS_KEY = "accessKey";
-
-    public static String parseAkFromSkynetAccessKey() {
-        Map envProp = System.getenv();
-        String skynetAccessID = envProp.get(IdAndKeyRollingUtil.SKYNET_ACCESSID);
-        String skynetAccessKey = envProp.get(IdAndKeyRollingUtil.SKYNET_ACCESSKEY);
-        String accessKey = null;
-        // follow 原有的判断条件
-        // 环境变量中,如果存在SKYNET_ACCESSID/SKYNET_ACCESSKEy(只要有其中一个变量,则认为一定是两个都存在的!
-        // if (StringUtils.isNotBlank(skynetAccessID) ||
-        // StringUtils.isNotBlank(skynetAccessKey)) {
-        // 检查严格,只有加密串不为空的时候才进去,不过 之前能跑的加密串都不应该为空
-        if (StringUtils.isNotBlank(skynetAccessKey)) {
-            LOGGER.info("Try to get accessId/accessKey from environment SKYNET_ACCESSKEY.");
-            accessKey = DESCipher.decrypt(skynetAccessKey);
-            if (StringUtils.isBlank(accessKey)) {
-                // 环境变量里面有,但是解析不到
-                throw DataXException.asDataXException(String.format(
-                        "Failed to get the [accessId]/[accessKey] from the environment variable. The [accessId]=[%s]",
-                        skynetAccessID));
-            }
-        }
-        if (StringUtils.isNotBlank(accessKey)) {
-            LOGGER.info("Get accessId/accessKey from environment variables SKYNET_ACCESSKEY successfully.");
-        }
-        return accessKey;
-    }
-
-    public static String getAccessIdAndKeyFromEnv(Configuration originalConfig) {
-        String accessId = null;
-        Map envProp = System.getenv();
-        accessId = envProp.get(IdAndKeyRollingUtil.SKYNET_ACCESSID);
-        String accessKey = null;
-        if (StringUtils.isBlank(accessKey)) {
-            // 老的没有出异常,只是获取不到ak
-            accessKey = IdAndKeyRollingUtil.parseAkFromSkynetAccessKey();
-        }
-
-        if (StringUtils.isNotBlank(accessKey)) {
-            // 确认使用这个的都是 accessId、accessKey的命名习惯
-            originalConfig.set(IdAndKeyRollingUtil.ACCESS_ID, accessId);
-            originalConfig.set(IdAndKeyRollingUtil.ACCESS_KEY, accessKey);
-        }
-        return accessKey;
-    }
-}
diff --git a/core/src/main/java/com/alibaba/datax/core/Engine.java b/core/src/main/java/com/alibaba/datax/core/Engine.java
index 38342532..4ba9fc18 100755
--- a/core/src/main/java/com/alibaba/datax/core/Engine.java
+++ b/core/src/main/java/com/alibaba/datax/core/Engine.java
@@ -79,16 +79,9 @@ public class Engine {
             perfReportEnable = false;
         }
 
-        int priority = 0;
-        try {
-            priority = Integer.parseInt(System.getenv("SKYNET_PRIORITY"));
-        }catch (NumberFormatException e){
-            LOG.warn("prioriy set to 0, because NumberFormatException, the value is: "+System.getProperty("PROIORY"));
-        }
-
         Configuration jobInfoConfig = allConf.getConfiguration(CoreConstant.DATAX_JOB_JOBINFO);
         //初始化PerfTrace
-        PerfTrace perfTrace = PerfTrace.getInstance(isJob, instanceId, taskGroupId, priority, traceEnable);
+        PerfTrace perfTrace = PerfTrace.getInstance(isJob, instanceId, taskGroupId, traceEnable);
         perfTrace.setJobInfo(jobInfoConfig,perfReportEnable,channelNumber);
         container.start();
diff --git a/core/src/main/java/com/alibaba/datax/core/container/util/JobAssignUtil.java b/core/src/main/java/com/alibaba/datax/core/container/util/JobAssignUtil.java
index 31ba60a4..cbd0d2a1 100755
--- a/core/src/main/java/com/alibaba/datax/core/container/util/JobAssignUtil.java
+++ b/core/src/main/java/com/alibaba/datax/core/container/util/JobAssignUtil.java
@@ -114,7 +114,7 @@ public final class JobAssignUtil {
      * 需要实现的效果通过例子来说是:
      * <pre>
      * a 库上有表:0, 1, 2
-     * a 库上有表:3, 4
+     * b 库上有表:3, 4
      * c 库上有表:5, 6, 7
      *
      * 如果有 4个 taskGroup
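The hunk above only corrects a typo in JobAssignUtil's fair-assignment javadoc ("a 库" → "b 库"). The behavior that example describes — interleave tasks across databases, then deal them round-robin to task groups so that no group is dominated by a single database — can be sketched as follows (an illustration of the javadoc example only, not DataX's actual implementation; the class and method names are invented):

```java
import java.util.*;

public class FairAssignSketch {
    // Interleave tasks across databases, then deal them to task groups
    // round-robin, so tasks from one database spread over all groups.
    static List<List<Integer>> assign(LinkedHashMap<String, List<Integer>> tasksByDb, int groupCount) {
        List<Integer> interleaved = new ArrayList<>();
        List<Iterator<Integer>> cursors = new ArrayList<>();
        for (List<Integer> tasks : tasksByDb.values()) {
            cursors.add(tasks.iterator());
        }
        boolean moved = true;
        while (moved) {
            moved = false;
            for (Iterator<Integer> cursor : cursors) {
                if (cursor.hasNext()) {
                    interleaved.add(cursor.next());
                    moved = true;
                }
            }
        }
        List<List<Integer>> groups = new ArrayList<>();
        for (int i = 0; i < groupCount; i++) {
            groups.add(new ArrayList<>());
        }
        for (int i = 0; i < interleaved.size(); i++) {
            groups.get(i % groupCount).add(interleaved.get(i));
        }
        return groups;
    }

    public static void main(String[] args) {
        // the example from the javadoc: tables 0-2 on a, 3-4 on b, 5-7 on c
        LinkedHashMap<String, List<Integer>> tables = new LinkedHashMap<>();
        tables.put("a", Arrays.asList(0, 1, 2));
        tables.put("b", Arrays.asList(3, 4));
        tables.put("c", Arrays.asList(5, 6, 7));
        // 4 task groups, as in the comment's example
        System.out.println(assign(tables, 4));
    }
}
```

With the interleaved order [0, 3, 5, 1, 4, 6, 2, 7], each of the 4 groups ends up with two tasks drawn from different databases.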
diff --git a/core/src/main/java/com/alibaba/datax/core/job/JobContainer.java b/core/src/main/java/com/alibaba/datax/core/job/JobContainer.java
index 26b2989f..49f5a0a1 100755
--- a/core/src/main/java/com/alibaba/datax/core/job/JobContainer.java
+++ b/core/src/main/java/com/alibaba/datax/core/job/JobContainer.java
@@ -27,7 +27,7 @@ import com.alibaba.datax.core.util.container.ClassLoaderSwapper;
 import com.alibaba.datax.core.util.container.CoreConstant;
 import com.alibaba.datax.core.util.container.LoadUtil;
 import com.alibaba.datax.dataxservice.face.domain.enums.ExecuteMode;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.commons.lang.StringUtils;
 import org.apache.commons.lang.Validate;
 import org.slf4j.Logger;
diff --git a/core/src/main/java/com/alibaba/datax/core/statistics/communication/CommunicationTool.java b/core/src/main/java/com/alibaba/datax/core/statistics/communication/CommunicationTool.java
index 51a601ae..1815ea02 100755
--- a/core/src/main/java/com/alibaba/datax/core/statistics/communication/CommunicationTool.java
+++ b/core/src/main/java/com/alibaba/datax/core/statistics/communication/CommunicationTool.java
@@ -2,7 +2,7 @@ package com.alibaba.datax.core.statistics.communication;
 
 import com.alibaba.datax.common.statistics.PerfTrace;
 import com.alibaba.datax.common.util.StrUtil;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.commons.lang.Validate;
 
 import java.text.DecimalFormat;
diff --git a/core/src/main/java/com/alibaba/datax/core/statistics/plugin/task/StdoutPluginCollector.java b/core/src/main/java/com/alibaba/datax/core/statistics/plugin/task/StdoutPluginCollector.java
index 8b2a8378..d88ad0a8 100755
--- a/core/src/main/java/com/alibaba/datax/core/statistics/plugin/task/StdoutPluginCollector.java
+++ b/core/src/main/java/com/alibaba/datax/core/statistics/plugin/task/StdoutPluginCollector.java
@@ -6,7 +6,7 @@ import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.core.statistics.communication.Communication;
 import com.alibaba.datax.core.util.container.CoreConstant;
 import com.alibaba.datax.core.statistics.plugin.task.util.DirtyRecord;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 
 import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
diff --git a/core/src/main/java/com/alibaba/datax/core/statistics/plugin/task/util/DirtyRecord.java b/core/src/main/java/com/alibaba/datax/core/statistics/plugin/task/util/DirtyRecord.java
index 1b0d5238..caa4cb5b 100755
--- a/core/src/main/java/com/alibaba/datax/core/statistics/plugin/task/util/DirtyRecord.java
+++ b/core/src/main/java/com/alibaba/datax/core/statistics/plugin/task/util/DirtyRecord.java
@@ -4,7 +4,7 @@ import com.alibaba.datax.common.element.Column;
 import com.alibaba.datax.common.element.Record;
 import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.core.util.FrameworkErrorCode;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 
 import java.math.BigDecimal;
 import java.math.BigInteger;
diff --git a/core/src/main/java/com/alibaba/datax/core/taskgroup/TaskGroupContainer.java b/core/src/main/java/com/alibaba/datax/core/taskgroup/TaskGroupContainer.java
index c30c94d9..b4b45695 100755
--- a/core/src/main/java/com/alibaba/datax/core/taskgroup/TaskGroupContainer.java
+++ b/core/src/main/java/com/alibaba/datax/core/taskgroup/TaskGroupContainer.java
@@ -27,7 +27,7 @@ import com.alibaba.datax.core.util.TransformerUtil;
 import com.alibaba.datax.core.util.container.CoreConstant;
 import com.alibaba.datax.core.util.container.LoadUtil;
 import com.alibaba.datax.dataxservice.face.domain.enums.State;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.commons.lang3.Validate;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
diff --git a/core/src/main/java/com/alibaba/datax/core/transport/record/DefaultRecord.java b/core/src/main/java/com/alibaba/datax/core/transport/record/DefaultRecord.java
index c78a2a87..1dfa02e8 100755
--- a/core/src/main/java/com/alibaba/datax/core/transport/record/DefaultRecord.java
+++ b/core/src/main/java/com/alibaba/datax/core/transport/record/DefaultRecord.java
@@ -5,7 +5,7 @@ import com.alibaba.datax.common.element.Record;
 import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.core.util.ClassSize;
 import com.alibaba.datax.core.util.FrameworkErrorCode;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 
 import java.util.ArrayList;
 import java.util.HashMap;
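Taken together, the fastjson → fastjson2 hunks in this patch reduce to a few mechanical rewrites, summarized below (a condensation of changes already shown above, not a complete migration guide):

```diff
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;

 // the pretty-printing flag moves from SerializerFeature to JSONWriter.Feature
-JSON.toJSONString(obj, SerializerFeature.PrettyFormat);
+JSON.toJSONString(obj, JSONWriter.Feature.PrettyFormat);

 // fastjson2's JSONObject no longer exposes JSONObject.Entry; iterate it as a Map
-for (JSONObject.Entry e : jsonObject.entrySet()) { ... }
+for (Map.Entry e : jsonObject.entrySet()) { ... }
```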
diff --git a/databendwriter/doc/databendwriter-CN.md b/databendwriter/doc/databendwriter-CN.md
new file mode 100644
index 00000000..d6a8f1f3
--- /dev/null
+++ b/databendwriter/doc/databendwriter-CN.md
@@ -0,0 +1,171 @@
+# DataX DatabendWriter
+[简体中文](./databendwriter-CN.md) | [English](./databendwriter.md)
+
+## 1 快速介绍
+
+Databend Writer 是一个 DataX 的插件,用于从 DataX 中写入数据到 Databend 表中。
+该插件基于[databend JDBC driver](https://github.com/databendcloud/databend-jdbc) ,它使用 [RESTful http protocol](https://databend.rs/doc/integrations/api/rest)
+在开源的 databend 和 [databend cloud](https://app.databend.com/) 上执行查询。
+
+在每个写入批次中,databend writer 将批量数据上传到内部的 S3 stage,然后执行相应的 insert SQL 将数据上传到 databend 表中。
+
+为了最佳的用户体验,如果您使用的是 databend 社区版本,您应该尝试采用 [S3](https://aws.amazon.com/s3/)/[minio](https://min.io/)/[OSS](https://www.alibabacloud.com/product/object-storage-service) 作为其底层存储层,因为
+它们支持预签名上传操作,否则您可能会在数据传输上浪费不必要的成本。
+
+您可以在[文档](https://databend.rs/doc/deploy/deploying-databend)中了解更多详细信息
+
+## 2 实现原理
+
+Databend Writer 将使用 DataX 从 DataX Reader 中获取生成的记录,并将记录批量插入到 databend 表中指定的列中。
+
+## 3 功能说明
+
+### 3.1 配置样例
+
+* 以下配置将从内存中读取一些生成的数据,并将数据上传到databend表中
+
+#### 准备工作
+```sql
+--- create table in databend
+drop table if exists datax.sample1;
+drop database if exists datax;
+create database if not exists datax;
+create table if not exists datax.sample1(a string, b int64, c date, d timestamp, e bool, f string, g variant);
+```
+
+#### 配置样例
+```json
+{
+  "job": {
+    "content": [
+      {
+        "reader": {
+          "name": "streamreader",
+          "parameter": {
+            "column" : [
+              {
+                "value": "DataX",
+                "type": "string"
+              },
+              {
+                "value": 19880808,
+                "type": "long"
+              },
+              {
+                "value": "1926-08-08 08:08:08",
+                "type": "date"
+              },
+              {
+                "value": "1988-08-08 08:08:08",
+                "type": "date"
+              },
+              {
+                "value": true,
+                "type": "bool"
+              },
+              {
+                "value": "test",
+                "type": "bytes"
+              },
+              {
+                "value": "{\"type\": \"variant\", \"value\": \"test\"}",
+                "type": "string"
+              }
+
+            ],
+            "sliceRecordCount": 10000
+          }
+        },
+        "writer": {
+          "name": "databendwriter",
+          "parameter": {
+            "username": "databend",
+            "password": "databend",
+            "column": ["a", "b", "c", "d", "e", "f", "g"],
+            "batchSize": 1000,
+            "preSql": [
+            ],
+            "postSql": [
+            ],
+            "connection": [
+              {
+                "jdbcUrl": "jdbc:databend://localhost:8000/datax",
+                "table": [
+                  "sample1"
+                ]
+              }
+            ]
+          }
+        }
+      }
+    ],
+    "setting": {
+      "speed": {
+        "channel": 1
+       }
+    }
+  }
+}
+```
+
+### 3.2 参数说明
+* jdbcUrl
+    * 描述: JDBC 数据源 url。请参阅仓库中的详细[文档](https://github.com/databendcloud/databend-jdbc)
+    * 必选: 是
+    * 默认值: 无
+    * 示例: jdbc:databend://localhost:8000/datax
+* username
+    * 描述: JDBC 数据源用户名
+    * 必选: 是
+    * 默认值: 无
+    * 示例: databend
+* password
+    * 描述: JDBC 数据源密码
+    * 必选: 是
+    * 默认值: 无
+    * 示例: databend
+* table
+    * 描述: 表名的集合,table应该包含column参数中的所有列。
+    * 必选: 是
+    * 默认值: 无
+    * 示例: ["sample1"]
+* column
+    * 描述: 表中的列名集合,字段顺序应该与reader的record中的column类型对应
+    * 必选: 是
+    * 默认值: 无
+    * 示例: ["a", "b", "c", "d", "e", "f", "g"]
+* batchSize
+    * 描述: 每个批次的记录数
+    * 必选: 否
+    * 默认值: 1000
+    * 示例: 1000
+* preSql
+    * 描述: 在写入数据之前执行的SQL语句
+    * 必选: 否
+    * 默认值: 无
+    * 示例: ["delete from datax.sample1"]
+* postSql
+    * 描述: 在写入数据之后执行的SQL语句
+    * 必选: 否
+    * 默认值: 无
+    * 示例: ["select count(*) from datax.sample1"]
+
+### 3.3 类型转化
+DataX中的数据类型可以转换为databend中的相应数据类型。下表显示了两种类型之间的对应关系。
+
+| DataX 内部类型 | Databend 数据类型                                             |
+|------------|-----------------------------------------------------------|
+| INT        | TINYINT, INT8, SMALLINT, INT16, INT, INT32, BIGINT, INT64 |
+| LONG       | TINYINT, INT8, SMALLINT, INT16, INT, INT32, BIGINT, INT64 |
+| STRING     | STRING, VARCHAR                                           |
+| DOUBLE     | FLOAT, DOUBLE                                             |
+| BOOL       | BOOLEAN, BOOL                                             |
+| DATE       | DATE, TIMESTAMP                                           |
+| BYTES      | STRING, VARCHAR                                           |
+
+## 4 性能测试
+
+## 5 约束限制
+目前,复杂数据类型支持不稳定,如果您想使用复杂数据类型,例如元组,数组,请检查databend和jdbc驱动程序的进一步版本。
+
+## FAQ
\ No newline at end of file
diff --git a/databendwriter/doc/databendwriter.md b/databendwriter/doc/databendwriter.md
new file mode 100644
index 00000000..0b57bf13
--- /dev/null
+++ b/databendwriter/doc/databendwriter.md
@@ -0,0 +1,166 @@
+# DataX DatabendWriter
+[简体中文](./databendwriter-CN.md) | [English](./databendwriter.md)
+
+## 1 Introduction
+Databend Writer is a plugin for DataX to write data to Databend Table from dataX records.
+The plugin is based on [databend JDBC driver](https://github.com/databendcloud/databend-jdbc) which use [RESTful http protocol](https://databend.rs/doc/integrations/api/rest)
+to execute query on open source databend and [databend cloud](https://app.databend.com/).
+
+During each write batch, databend writer will upload batch data into internal S3 stage and execute corresponding insert SQL to upload data into databend table.
+
+For the best user experience, if you are using the databend community distribution, you should adopt [S3](https://aws.amazon.com/s3/)/[minio](https://min.io/)/[OSS](https://www.alibabacloud.com/product/object-storage-service) as its underlying storage layer, since
+they support presigned uploads; otherwise you may incur unnecessary data-transfer costs.
+
+You could see more details on the [doc](https://databend.rs/doc/deploy/deploying-databend)
+
+## 2 Detailed Implementation
+Databend Writer uses the DataX framework to fetch records generated by a DataX Reader, and then batch-inserts them into the designated columns of your databend table.
+
+## 3 Features
+### 3.1 Example Configurations
+* the following configuration would read some generated data in memory and upload data into databend table
+
+#### Preparation
+```sql
+--- create table in databend
+drop table if exists datax.sample1;
+drop database if exists datax;
+create database if not exists datax;
+create table if not exists datax.sample1(a string, b int64, c date, d timestamp, e bool, f string, g variant);
+```
+
+#### Configurations
+```json
+{
+  "job": {
+    "content": [
+      {
+        "reader": {
+          "name": "streamreader",
+          "parameter": {
+            "column" : [
+              {
+                "value": "DataX",
+                "type": "string"
+              },
+              {
+                "value": 19880808,
+                "type": "long"
+              },
+              {
+                "value": "1926-08-08 08:08:08",
+                "type": "date"
+              },
+              {
+                "value": "1988-08-08 08:08:08",
+                "type": "date"
+              },
+              {
+                "value": true,
+                "type": "bool"
+              },
+              {
+                "value": "test",
+                "type": "bytes"
+              },
+              {
+                "value": "{\"type\": \"variant\", \"value\": \"test\"}",
+                "type": "string"
+              }
+
+            ],
+            "sliceRecordCount": 10000
+          }
+        },
+        "writer": {
+          "name": "databendwriter",
+          "parameter": {
+            "username": "databend",
+            "password": "databend",
+            "column": ["a", "b", "c", "d", "e", "f", "g"],
+            "batchSize": 1000,
+            "preSql": [
+            ],
+            "postSql": [
+            ],
+            "connection": [
+              {
+                "jdbcUrl": "jdbc:databend://localhost:8000/datax",
+                "table": [
+                  "sample1"
+                ]
+              }
+            ]
+          }
+        }
+      }
+    ],
+    "setting": {
+      "speed": {
+        "channel": 1
+       }
+    }
+  }
+}
+```
+
+### 3.2 Configuration Description
+* jdbcUrl
  * Description: JDBC data source URL for Databend. Please take a look at the repository for the detailed [doc](https://github.com/databendcloud/databend-jdbc)
+  * Required: yes
+  * Default: none
+  * Example: jdbc:databend://localhost:8000/datax
+* username
+  * Description: Databend user name
+  * Required: yes
+  * Default: none
+  * Example: databend
+* password
+  * Description: Databend user password
+  * Required: yes
+  * Default: none
+  * Example: databend
+* table
  * Description: A list of target table names; each table should contain all of the columns listed in the column parameter.
+  * Required: yes
+  * Default: none
+  * Example: ["sample1"]
+* column
  * Description: A list of column field names that should be inserted into the table. If you want to insert all column fields, use `["*"]` instead.
+  * Required: yes
+  * Default: none
+  * Example: ["a", "b", "c", "d", "e", "f", "g"]
+* batchSize
+  * Description: The number of records to be inserted in each batch.
+  * Required: no
+  * Default: 1024
+* preSql
+  * Description: A list of SQL statements that will be executed before the write operation.
+  * Required: no
+  * Default: none
+* postSql
+  * Description: A list of SQL statements that will be executed after the write operation.
+  * Required: no
+  * Default: none
+
+### 3.3 Type Convert
+Data types in datax can be converted to the corresponding data types in databend. The following table shows the correspondence between the two types.
+
+| DataX Type | Databend Type                                             |
+|------------|-----------------------------------------------------------|
+| INT        | TINYINT, INT8, SMALLINT, INT16, INT, INT32, BIGINT, INT64 |
+| LONG       | TINYINT, INT8, SMALLINT, INT16, INT, INT32, BIGINT, INT64 |
+| STRING     | STRING, VARCHAR                                           |
+| DOUBLE     | FLOAT, DOUBLE                                             |
+| BOOL       | BOOLEAN, BOOL                                             |
+| DATE       | DATE, TIMESTAMP                                           |
+| BYTES      | STRING, VARCHAR                                           |
+
+
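As a rough sketch of how the table above could drive the choice of a JDBC bind type, consider the following (a hypothetical illustration only — the real conversion happens inside CommonRdbmsWriter, and `TypeMappingSketch`/`sqlTypeFor` are invented names):

```java
import java.sql.Types;

public class TypeMappingSketch {
    // Picks a representative java.sql.Types constant for each DataX internal
    // type, following the correspondence table above. Illustrative only.
    static int sqlTypeFor(String dataxType) {
        switch (dataxType) {
            case "INT":
            case "LONG":   return Types.BIGINT;    // widest of the integer targets
            case "STRING":
            case "BYTES":  return Types.VARCHAR;   // both map to STRING/VARCHAR
            case "DOUBLE": return Types.DOUBLE;
            case "BOOL":   return Types.BOOLEAN;
            case "DATE":   return Types.TIMESTAMP; // DATE or TIMESTAMP targets
            default:
                throw new IllegalArgumentException("unsupported DataX type: " + dataxType);
        }
    }
}
```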
+## 4 Performance Test
+
+
+## 5 Restrictions
+Currently, complex data type support is not stable. If you want to use complex data types such as tuple or array, please check later releases of databend and the jdbc driver.
+
+## FAQ
diff --git a/databendwriter/pom.xml b/databendwriter/pom.xml
new file mode 100644
index 00000000..976ecd6a
--- /dev/null
+++ b/databendwriter/pom.xml
@@ -0,0 +1,101 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>datax-all</artifactId>
+        <groupId>com.alibaba.datax</groupId>
+        <version>0.0.1-SNAPSHOT</version>
+    </parent>
+
+    <modelVersion>4.0.0</modelVersion>
+    <artifactId>databendwriter</artifactId>
+    <name>databendwriter</name>
+    <packaging>jar</packaging>
+
+    <dependencies>
+        <dependency>
+            <groupId>com.databend</groupId>
+            <artifactId>databend-jdbc</artifactId>
+            <version>0.0.5</version>
+        </dependency>
+        <dependency>
+            <groupId>com.alibaba.datax</groupId>
+            <artifactId>datax-core</artifactId>
+            <version>${datax-project-version}</version>
+        </dependency>
+        <dependency>
+            <groupId>com.alibaba.datax</groupId>
+            <artifactId>datax-common</artifactId>
+            <version>${datax-project-version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-api</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>ch.qos.logback</groupId>
+            <artifactId>logback-classic</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>com.alibaba.datax</groupId>
+            <artifactId>plugin-rdbms-util</artifactId>
+            <version>${datax-project-version}</version>
+            <exclusions>
+                <exclusion>
+                    <groupId>com.google.guava</groupId>
+                    <artifactId>guava</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>test</scope>
+        </dependency>
+    </dependencies>
+    <build>
+        <resources>
+            <resource>
+                <directory>src/main/java</directory>
+                <includes>
+                    <include>**/*.properties</include>
+                </includes>
+            </resource>
+        </resources>
+        <plugins>
+            <!-- compiler plugin -->
+            <plugin>
+                <artifactId>maven-compiler-plugin</artifactId>
+                <configuration>
+                    <source>${jdk-version}</source>
+                    <target>${jdk-version}</target>
+                    <encoding>${project-sourceEncoding}</encoding>
+                </configuration>
+            </plugin>
+            <!-- assembly plugin -->
+            <plugin>
+                <artifactId>maven-assembly-plugin</artifactId>
+                <configuration>
+                    <descriptors>
+                        <descriptor>src/main/assembly/package.xml</descriptor>
+                    </descriptors>
+                    <finalName>datax</finalName>
+                </configuration>
+                <executions>
+                    <execution>
+                        <id>dwzip</id>
+                        <phase>package</phase>
+                        <goals>
+                            <goal>single</goal>
+                        </goals>
+                    </execution>
+                </executions>
+            </plugin>
+        </plugins>
+    </build>
+</project>
diff --git a/databendwriter/src/main/assembly/package.xml b/databendwriter/src/main/assembly/package.xml
new file mode 100755
index 00000000..8a9ba1b2
--- /dev/null
+++ b/databendwriter/src/main/assembly/package.xml
@@ -0,0 +1,34 @@
+<assembly
+        xmlns="http://maven.apache.org/plugin/assembly/assembly-1.0.0"
+        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+        xsi:schemaLocation="http://maven.apache.org/plugin/assembly/assembly-1.0.0 ">
+    <id></id>
+    <formats>
+        <format>dir</format>
+    </formats>
+    <includeBaseDirectory>false</includeBaseDirectory>
+    <fileSets>
+        <fileSet>
+            <directory>src/main/resources</directory>
+            <includes>
+                <include>plugin.json</include>
+                <include>plugin_job_template.json</include>
+            </includes>
+            <outputDirectory>plugin/writer/databendwriter</outputDirectory>
+        </fileSet>
+        <fileSet>
+            <directory>target/</directory>
+            <includes>
+                <include>databendwriter-0.0.1-SNAPSHOT.jar</include>
+            </includes>
+            <outputDirectory>plugin/writer/databendwriter</outputDirectory>
+        </fileSet>
+    </fileSets>
+
+    <dependencySets>
+        <dependencySet>
+            <useProjectArtifact>false</useProjectArtifact>
+            <outputDirectory>plugin/writer/databendwriter/libs</outputDirectory>
+            <scope>runtime</scope>
+        </dependencySet>
+    </dependencySets>
+</assembly>
diff --git a/databendwriter/src/main/java/com/alibaba/datax/plugin/writer/databendwriter/DatabendWriter.java b/databendwriter/src/main/java/com/alibaba/datax/plugin/writer/databendwriter/DatabendWriter.java
new file mode 100644
index 00000000..a4222f08
--- /dev/null
+++ b/databendwriter/src/main/java/com/alibaba/datax/plugin/writer/databendwriter/DatabendWriter.java
@@ -0,0 +1,248 @@
+package com.alibaba.datax.plugin.writer.databendwriter;
+
+import com.alibaba.datax.common.element.Column;
+import com.alibaba.datax.common.element.StringColumn;
+import com.alibaba.datax.common.exception.CommonErrorCode;
+import com.alibaba.datax.common.exception.DataXException;
+import com.alibaba.datax.common.plugin.RecordReceiver;
+import com.alibaba.datax.common.spi.Writer;
+import com.alibaba.datax.common.util.Configuration;
+import com.alibaba.datax.plugin.rdbms.util.DataBaseType;
+import com.alibaba.datax.plugin.rdbms.writer.CommonRdbmsWriter;
+import com.alibaba.datax.plugin.writer.databendwriter.util.DatabendWriterUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.*;
+import java.util.List;
+import java.util.regex.Pattern;
+
+public class DatabendWriter extends Writer
+{
+    private static final DataBaseType DATABASE_TYPE = DataBaseType.Databend;
+
+    public static class Job
+            extends Writer.Job
+    {
+        private static final Logger LOG = LoggerFactory.getLogger(Job.class);
+        private Configuration originalConfig;
+        private CommonRdbmsWriter.Job commonRdbmsWriterMaster;
+
+        @Override
+        public void init()
+        {
+            this.originalConfig = super.getPluginJobConf();
+            this.commonRdbmsWriterMaster = new CommonRdbmsWriter.Job(DATABASE_TYPE);
+            this.commonRdbmsWriterMaster.init(this.originalConfig);
+            // placeholder currently not supported by databend driver, needs special treatment
+            DatabendWriterUtil.dealWriteMode(this.originalConfig);
+        }
+
+        @Override
+        public void preCheck()
+        {
+            this.init();
+            this.commonRdbmsWriterMaster.writerPreCheck(this.originalConfig, DATABASE_TYPE);
+        }
+
+        @Override
+        public void prepare() {
+            this.commonRdbmsWriterMaster.prepare(this.originalConfig);
+        }
+
+        @Override
+        public List<Configuration> split(int mandatoryNumber) {
+            return this.commonRdbmsWriterMaster.split(this.originalConfig, mandatoryNumber);
+        }
+
+        @Override
+        public void post() {
+            this.commonRdbmsWriterMaster.post(this.originalConfig);
+        }
+
+        @Override
+        public void destroy() {
+            this.commonRdbmsWriterMaster.destroy(this.originalConfig);
+        }
+    }
+
+
+    public static class Task extends Writer.Task
+    {
+        private static final Logger LOG = LoggerFactory.getLogger(Task.class);
+
+        private Configuration writerSliceConfig;
+
+        private CommonRdbmsWriter.Task commonRdbmsWriterSlave;
+
+        @Override
+        public void init()
+        {
+            this.writerSliceConfig = super.getPluginJobConf();
+
+            this.commonRdbmsWriterSlave = new CommonRdbmsWriter.Task(DataBaseType.Databend){
+                @Override
+                protected PreparedStatement fillPreparedStatementColumnType(PreparedStatement preparedStatement, int columnIndex, int columnSqltype, String typeName, Column column) throws SQLException {
+                    try {
+                        if (column.getRawData() == null) {
+                            preparedStatement.setNull(columnIndex + 1, columnSqltype);
+                            return preparedStatement;
+                        }
+
+                        java.util.Date utilDate;
+                        switch (columnSqltype) {
+
+                            case Types.TINYINT:
+                            case Types.SMALLINT:
+                            case Types.INTEGER:
+                                preparedStatement.setInt(columnIndex + 1, column.asBigInteger().intValue());
+                                break;
+                            case Types.BIGINT:
+                                preparedStatement.setLong(columnIndex + 1, column.asLong());
+                                break;
+                            case Types.DECIMAL:
+                                preparedStatement.setBigDecimal(columnIndex + 1, column.asBigDecimal());
+                                break;
+                            case Types.FLOAT:
+                            case Types.REAL:
+                                preparedStatement.setFloat(columnIndex + 1, column.asDouble().floatValue());
+                                break;
+                            case Types.DOUBLE:
+                                preparedStatement.setDouble(columnIndex + 1, column.asDouble());
+                                break;
+                            case Types.DATE:
+                                java.sql.Date sqlDate = null;
+                                try {
+                                    utilDate = column.asDate();
+                                } catch (DataXException e) {
+                                    throw new SQLException(String.format(
+                                            "Date type conversion error: [%s]", column));
+                                }
+
+                                if (null != utilDate) {
+                                    sqlDate = new java.sql.Date(utilDate.getTime());
+                                }
+                                preparedStatement.setDate(columnIndex + 1, sqlDate);
+                                break;
+
+                            case Types.TIME:
+                                java.sql.Time sqlTime = null;
+                                try {
+                                    utilDate = column.asDate();
+                                } catch (DataXException e) {
+                                    throw new SQLException(String.format(
+                                            "Date type conversion error: [%s]", column));
+                                }
+
+                                if (null != utilDate) {
+                                    sqlTime = new java.sql.Time(utilDate.getTime());
+                                }
+                                preparedStatement.setTime(columnIndex + 1, sqlTime);
+                                break;
+
+                            case Types.TIMESTAMP:
+                                Timestamp sqlTimestamp = null;
+                                if (column instanceof StringColumn && column.asString() != null) {
+                                    String timeStampStr = column.asString();
+                                    // a java.sql.Timestamp string must look like "2017-07-12 14:39:00.123566"
+                                    String pattern = "^\\d+-\\d+-\\d+ \\d+:\\d+:\\d+\\.\\d+";
+                                    boolean isMatch = Pattern.matches(pattern, timeStampStr);
+                                    if (isMatch) {
+                                        sqlTimestamp = Timestamp.valueOf(timeStampStr);
+                                        preparedStatement.setTimestamp(columnIndex + 1, sqlTimestamp);
+                                        break;
+                                    }
+                                }
+                                try {
+                                    utilDate = column.asDate();
+                                } catch (DataXException e) {
+                                    throw new SQLException(String.format(
+                                            "Date type conversion error: [%s]", column));
+                                }
+
+                                if (null != utilDate) {
+                                    sqlTimestamp = new Timestamp(
+                                            utilDate.getTime());
+                                }
+                                preparedStatement.setTimestamp(columnIndex + 1, sqlTimestamp);
+                                break;
+
+                            case Types.BINARY:
+                            case Types.VARBINARY:
+                            case Types.BLOB:
+                            case Types.LONGVARBINARY:
+                                preparedStatement.setBytes(columnIndex + 1, column
+                                        .asBytes());
+                                break;
+
+                            case Types.BOOLEAN:
+
+                            // warn: bit(1) -> Types.BIT, setBoolean is usable
+                            // warn: bit(>1) -> Types.VARBINARY, setBytes is usable
+                            case Types.BIT:
+                                if (this.dataBaseType == DataBaseType.MySql) {
+                                    Boolean asBoolean = column.asBoolean();
+                                    if (asBoolean != null) {
+                                        preparedStatement.setBoolean(columnIndex + 1, asBoolean);
+                                    } else {
+                                        preparedStatement.setNull(columnIndex + 1, Types.BIT);
+                                    }
+                                } else {
+                                    preparedStatement.setString(columnIndex + 1, column.asString());
+                                }
+                                break;
+
+                            default:
+                                // cast variant / array into string is fine.
+                                preparedStatement.setString(columnIndex + 1, column.asString());
+                                break;
+                        }
+                        return preparedStatement;
+                    } catch (DataXException e) {
+                        // when type conversion fails or overflows, report exactly which column was at fault
+                        if (e.getErrorCode() == CommonErrorCode.CONVERT_NOT_SUPPORT ||
+                                e.getErrorCode() == CommonErrorCode.CONVERT_OVER_FLOW) {
+                            throw DataXException
+                                    .asDataXException(
+                                            e.getErrorCode(),
+                                            String.format(
+                                                    "type conversion error. columnName: [%s], columnType:[%d], columnJavaType: [%s]. please change the data type in given column field or do not sync on the column.",
+                                                    this.resultSetMetaData.getLeft()
+                                                            .get(columnIndex),
+                                                    this.resultSetMetaData.getMiddle()
+                                                            .get(columnIndex),
+                                                    this.resultSetMetaData.getRight()
+                                                            .get(columnIndex)));
+                        } else {
+                            throw e;
+                        }
+                    }
+                }
+
+            };
+            this.commonRdbmsWriterSlave.init(this.writerSliceConfig);
+        }
+
+        @Override
+        public void destroy()
+        {
+            this.commonRdbmsWriterSlave.destroy(this.writerSliceConfig);
+        }
+
+        @Override
+        public void prepare() {
+            this.commonRdbmsWriterSlave.prepare(this.writerSliceConfig);
+        }
+
+        @Override
+        public void post() {
+            this.commonRdbmsWriterSlave.post(this.writerSliceConfig);
+        }
+        @Override
+        public void startWrite(RecordReceiver lineReceiver)
+        {
+            this.commonRdbmsWriterSlave.startWrite(lineReceiver, this.writerSliceConfig, this.getTaskPluginCollector());
+        }
+
+    }
+}
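The `Types.TIMESTAMP` branch above fast-paths string columns that already match the JDBC timestamp layout, skipping the lossier detour through `Column.asDate()`. A minimal standalone sketch of that check (the class and method names here are illustrative, not part of the plugin; the regex escapes the dot, which the original leaves as a wildcard):

```java
import java.sql.Timestamp;
import java.util.regex.Pattern;

public class TimestampFastPath {
    // Same shape as the writer's pattern: "yyyy-MM-dd HH:mm:ss" plus a fractional part.
    private static final Pattern JDBC_TS =
            Pattern.compile("^\\d+-\\d+-\\d+ \\d+:\\d+:\\d+\\.\\d+");

    // Returns a Timestamp when the string is already in JDBC form,
    // null when the caller should fall back to Column.asDate().
    public static Timestamp tryParse(String value) {
        if (JDBC_TS.matcher(value).matches()) {
            return Timestamp.valueOf(value);
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(tryParse("2017-07-12 14:39:00.123566")); // parsed directly
        System.out.println(tryParse("2017/07/12"));                 // null -> fallback path
    }
}
```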
diff --git a/databendwriter/src/main/java/com/alibaba/datax/plugin/writer/databendwriter/util/DatabendWriterUtil.java b/databendwriter/src/main/java/com/alibaba/datax/plugin/writer/databendwriter/util/DatabendWriterUtil.java
new file mode 100644
index 00000000..a862e920
--- /dev/null
+++ b/databendwriter/src/main/java/com/alibaba/datax/plugin/writer/databendwriter/util/DatabendWriterUtil.java
@@ -0,0 +1,40 @@
+package com.alibaba.datax.plugin.writer.databendwriter.util;
+import com.alibaba.datax.common.util.Configuration;
+import com.alibaba.datax.plugin.rdbms.writer.Constant;
+import com.alibaba.datax.plugin.rdbms.writer.Key;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.List;
+import java.util.StringJoiner;
+
+public final class DatabendWriterUtil
+{
+    private static final Logger LOG = LoggerFactory.getLogger(DatabendWriterUtil.class);
+
+    private DatabendWriterUtil() {}
+    public static void dealWriteMode(Configuration originalConfig)
+    {
+        List<String> columns = originalConfig.getList(Key.COLUMN, String.class);
+
+        String jdbcUrl = originalConfig.getString(String.format("%s[0].%s",
+                Constant.CONN_MARK, Key.JDBC_URL));
+
+        String writeMode = originalConfig.getString(Key.WRITE_MODE, "INSERT");
+
+        StringBuilder writeDataSqlTemplate = new StringBuilder();
+        writeDataSqlTemplate.append("INSERT INTO %s");
+        StringJoiner columnString = new StringJoiner(",");
+
+        for (String column : columns) {
+            columnString.add(column);
+        }
+        writeDataSqlTemplate.append(String.format("(%s)", columnString));
+        writeDataSqlTemplate.append(" VALUES");
+
+        LOG.info("Write data [\n{}\n], which jdbcUrl like:[{}]", writeDataSqlTemplate, jdbcUrl);
+
+        originalConfig.set(Constant.INSERT_OR_REPLACE_TEMPLATE_MARK, writeDataSqlTemplate.toString());
+    }
+}
\ No newline at end of file
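`dealWriteMode` above renders only the column list eagerly and leaves the table name as a `%s` placeholder for `CommonRdbmsWriter` to fill in per task. A self-contained sketch of the same template construction (the `build` helper is illustrative, not part of the plugin):

```java
import java.util.Arrays;
import java.util.List;
import java.util.StringJoiner;

public class InsertTemplateSketch {
    // Builds "INSERT INTO %s(col1,col2,...) VALUES"; the %s survives so the
    // rdbms writer can substitute each task's table name later.
    public static String build(List<String> columns) {
        StringJoiner cols = new StringJoiner(",");
        for (String column : columns) {
            cols.add(column);
        }
        return "INSERT INTO %s" + String.format("(%s)", cols) + " VALUES";
    }

    public static void main(String[] args) {
        System.out.println(build(Arrays.asList("col1", "col2", "col3")));
    }
}
```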
diff --git a/databendwriter/src/main/resources/plugin.json b/databendwriter/src/main/resources/plugin.json
new file mode 100644
index 00000000..bab0130d
--- /dev/null
+++ b/databendwriter/src/main/resources/plugin.json
@@ -0,0 +1,6 @@
+{
+  "name": "databendwriter",
+  "class": "com.alibaba.datax.plugin.writer.databendwriter.DatabendWriter",
+  "description": "execute batch insert SQL to write DataX records into Databend",
+  "developer": "databend"
+}
\ No newline at end of file
diff --git a/databendwriter/src/main/resources/plugin_job_template.json b/databendwriter/src/main/resources/plugin_job_template.json
new file mode 100644
index 00000000..34d4b251
--- /dev/null
+++ b/databendwriter/src/main/resources/plugin_job_template.json
@@ -0,0 +1,19 @@
+{
+  "name": "databendwriter",
+  "parameter": {
+    "username": "username",
+    "password": "password",
+    "column": ["col1", "col2", "col3"],
+    "connection": [
+      {
+        "jdbcUrl": "jdbc:databend://<host>:<port>[/<database>]",
+        "table": "table1"
+      }
+    ],
+    "preSql": [],
+    "postSql": [],
+
+    "maxBatchRows": 65536,
+    "maxBatchSize": 134217728
+  }
+}
\ No newline at end of file
diff --git a/datahubreader/src/main/java/com/alibaba/datax/plugin/reader/datahubreader/DatahubClientHelper.java b/datahubreader/src/main/java/com/alibaba/datax/plugin/reader/datahubreader/DatahubClientHelper.java
index 6f601fb4..2b7bcec4 100644
--- a/datahubreader/src/main/java/com/alibaba/datax/plugin/reader/datahubreader/DatahubClientHelper.java
+++ b/datahubreader/src/main/java/com/alibaba/datax/plugin/reader/datahubreader/DatahubClientHelper.java
@@ -1,8 +1,8 @@
 package com.alibaba.datax.plugin.reader.datahubreader;
 
 import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.TypeReference;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.TypeReference;
 import com.aliyun.datahub.client.DatahubClient;
 import com.aliyun.datahub.client.DatahubClientBuilder;
 import com.aliyun.datahub.client.auth.Account;
diff --git a/datahubwriter/src/main/java/com/alibaba/datax/plugin/writer/datahubwriter/DatahubClientHelper.java b/datahubwriter/src/main/java/com/alibaba/datax/plugin/writer/datahubwriter/DatahubClientHelper.java
index 2d94212c..c25d1210 100644
--- a/datahubwriter/src/main/java/com/alibaba/datax/plugin/writer/datahubwriter/DatahubClientHelper.java
+++ b/datahubwriter/src/main/java/com/alibaba/datax/plugin/writer/datahubwriter/DatahubClientHelper.java
@@ -3,8 +3,8 @@ package com.alibaba.datax.plugin.writer.datahubwriter;
 import org.apache.commons.lang3.StringUtils;
 
 import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.TypeReference;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.TypeReference;
 import com.aliyun.datahub.client.DatahubClient;
 import com.aliyun.datahub.client.DatahubClientBuilder;
 import com.aliyun.datahub.client.auth.Account;
diff --git a/datahubwriter/src/main/java/com/alibaba/datax/plugin/writer/datahubwriter/DatahubWriter.java b/datahubwriter/src/main/java/com/alibaba/datax/plugin/writer/datahubwriter/DatahubWriter.java
index f6dc1105..cd414fc5 100644
--- a/datahubwriter/src/main/java/com/alibaba/datax/plugin/writer/datahubwriter/DatahubWriter.java
+++ b/datahubwriter/src/main/java/com/alibaba/datax/plugin/writer/datahubwriter/DatahubWriter.java
@@ -8,7 +8,7 @@ import com.alibaba.datax.common.spi.Writer;
 import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.common.util.DataXCaseEnvUtil;
 import com.alibaba.datax.common.util.RetryUtil;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import com.aliyun.datahub.client.DatahubClient;
 import com.aliyun.datahub.client.model.FieldType;
 import com.aliyun.datahub.client.model.GetTopicResult;
diff --git a/doriswriter/src/main/java/com/alibaba/datax/plugin/writer/doriswriter/DorisJsonCodec.java b/doriswriter/src/main/java/com/alibaba/datax/plugin/writer/doriswriter/DorisJsonCodec.java
index e6c05733..68abd9eb 100644
--- a/doriswriter/src/main/java/com/alibaba/datax/plugin/writer/doriswriter/DorisJsonCodec.java
+++ b/doriswriter/src/main/java/com/alibaba/datax/plugin/writer/doriswriter/DorisJsonCodec.java
@@ -1,7 +1,7 @@
 package com.alibaba.datax.plugin.writer.doriswriter;
 
 import com.alibaba.datax.common.element.Record;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 
 import java.util.HashMap;
 import java.util.List;
diff --git a/doriswriter/src/main/java/com/alibaba/datax/plugin/writer/doriswriter/DorisStreamLoadObserver.java b/doriswriter/src/main/java/com/alibaba/datax/plugin/writer/doriswriter/DorisStreamLoadObserver.java
index efb3d9db..6f7e9a5a 100644
--- a/doriswriter/src/main/java/com/alibaba/datax/plugin/writer/doriswriter/DorisStreamLoadObserver.java
+++ b/doriswriter/src/main/java/com/alibaba/datax/plugin/writer/doriswriter/DorisStreamLoadObserver.java
@@ -1,6 +1,6 @@
 package com.alibaba.datax.plugin.writer.doriswriter;
 
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.commons.codec.binary.Base64;
 import org.apache.http.HttpEntity;
 import org.apache.http.HttpHeaders;
diff --git a/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/ElasticSearchClient.java b/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/ElasticSearchClient.java
index 12ac3dd9..08486e1f 100644
--- a/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/ElasticSearchClient.java
+++ b/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/ElasticSearchClient.java
@@ -5,8 +5,8 @@ import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.plugin.writer.elasticsearchwriter.jest.ClusterInfo;
 import com.alibaba.datax.plugin.writer.elasticsearchwriter.jest.ClusterInfoResult;
 import com.alibaba.datax.plugin.writer.elasticsearchwriter.jest.PutMapping7;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONObject;
 import com.google.gson.Gson;
 import com.google.gson.JsonElement;
 import com.google.gson.JsonObject;
@@ -53,6 +53,8 @@ public class ElasticSearchClient {
     public ElasticSearchClient(Configuration conf) {
         this.conf = conf;
         String endpoint = Key.getEndpoint(conf);
+        // Elasticsearch supports writing against a cluster, so endpoint may be a comma-separated node list
+        String[] endpoints = endpoint.split(",");
         String user = Key.getUsername(conf);
         String passwd = Key.getPassword(conf);
         boolean multiThread = Key.isMultiThread(conf);
@@ -63,7 +65,7 @@ public class ElasticSearchClient {
         int totalConnection = this.conf.getInt("maxTotalConnection", 200);
         JestClientFactory factory = new JestClientFactory();
         Builder httpClientConfig = new HttpClientConfig
-                .Builder(endpoint)
+                .Builder(Arrays.asList(endpoints))
 //                .setPreemptiveAuth(new HttpHost(endpoint))
                 .multiThreaded(multiThread)
                 .connTimeout(readTimeout)
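The change above lets `endpoint` carry a comma-separated node list, which Jest's `HttpClientConfig.Builder` accepts as a collection of server URIs. The splitting itself is a plain `String.split`; a sketch (trimming whitespace around the commas is an extra nicety not in the original):

```java
import java.util.ArrayList;
import java.util.List;

public class EndpointSplit {
    // Turns "http://es1:9200, http://es2:9200" into a list Jest can consume.
    public static List<String> split(String endpoint) {
        List<String> out = new ArrayList<>();
        for (String part : endpoint.split(",")) {
            out.add(part.trim()); // tolerate spaces around the commas
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(split("http://es1:9200, http://es2:9200"));
    }
}
```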
diff --git a/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/ElasticSearchWriter.java b/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/ElasticSearchWriter.java
index 6236e333..2c8ed2d0 100644
--- a/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/ElasticSearchWriter.java
+++ b/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/ElasticSearchWriter.java
@@ -9,11 +9,11 @@ import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.common.util.DataXCaseEnvUtil;
 import com.alibaba.datax.common.util.RetryUtil;
 import com.alibaba.datax.plugin.writer.elasticsearchwriter.Key.ActionType;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONArray;
-import com.alibaba.fastjson.JSONObject;
-import com.alibaba.fastjson.TypeReference;
-import com.alibaba.fastjson.serializer.SerializerFeature;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONArray;
+import com.alibaba.fastjson2.JSONObject;
+import com.alibaba.fastjson2.TypeReference;
+import com.alibaba.fastjson2.JSONWriter;
 import com.google.common.base.Joiner;
 import io.searchbox.client.JestResult;
 import io.searchbox.core.*;
@@ -927,9 +927,8 @@ public class ElasticSearchWriter extends Writer {
                             Index.Builder builder = null;
                             if (this.enableWriteNull) {
                                 builder = new Index.Builder(
-                                        JSONObject.toJSONString(data, SerializerFeature.WriteMapNullValue,
-                                                SerializerFeature.QuoteFieldNames, SerializerFeature.SkipTransientField,
-                                                SerializerFeature.WriteEnumUsingToString, SerializerFeature.SortField));
+                                        JSONObject.toJSONString(data, JSONWriter.Feature.WriteMapNullValue,
+                                                JSONWriter.Feature.WriteEnumUsingToString));
                             } else {
                                 builder = new Index.Builder(JSONObject.toJSONString(data));
                             }
@@ -958,9 +957,8 @@ public class ElasticSearchWriter extends Writer {
                             if (this.enableWriteNull) {
                                 // write: {a:"1",b:null}
                             update = new Update.Builder(
-                                    JSONObject.toJSONString(updateDoc, SerializerFeature.WriteMapNullValue,
-                                            SerializerFeature.QuoteFieldNames, SerializerFeature.SkipTransientField,
-                                            SerializerFeature.WriteEnumUsingToString, SerializerFeature.SortField));
+                                    JSONObject.toJSONString(updateDoc, JSONWriter.Feature.WriteMapNullValue,
+                                            JSONWriter.Feature.WriteEnumUsingToString));
+                            // on top of DEFAULT_GENERATE_FEATURE, this only adds SerializerFeature.WRITE_MAP_NULL_FEATURES
                             } else {
                                 // write: {"a":"1"}
diff --git a/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/JsonPathUtil.java b/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/JsonPathUtil.java
index 49703435..e7619e7c 100644
--- a/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/JsonPathUtil.java
+++ b/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/JsonPathUtil.java
@@ -2,7 +2,7 @@ package com.alibaba.datax.plugin.writer.elasticsearchwriter;
 
 import java.util.List;
 
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSONObject;
 
 public class JsonPathUtil {
 
diff --git a/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/JsonUtil.java b/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/JsonUtil.java
index e73c87be..ad6c01be 100644
--- a/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/JsonUtil.java
+++ b/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/JsonUtil.java
@@ -1,8 +1,8 @@
 package com.alibaba.datax.plugin.writer.elasticsearchwriter;
 
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONException;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONException;
+import com.alibaba.fastjson2.JSONObject;
 
 /**
  * @author bozu
diff --git a/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/Key.java b/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/Key.java
index af197711..fcaac935 100644
--- a/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/Key.java
+++ b/elasticsearchwriter/src/main/java/com/alibaba/datax/plugin/writer/elasticsearchwriter/Key.java
@@ -1,8 +1,8 @@
 package com.alibaba.datax.plugin.writer.elasticsearchwriter;
 
 import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.TypeReference;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.TypeReference;
 
 import org.apache.commons.lang3.StringUtils;
 
diff --git a/ftpwriter/doc/ftpwriter.md b/ftpwriter/doc/ftpwriter.md
index 6b1b2687..a38a1052 100644
--- a/ftpwriter/doc/ftpwriter.md
+++ b/ftpwriter/doc/ftpwriter.md
@@ -24,7 +24,7 @@ FtpWriter实现了从DataX协议转为FTP文件功能,FTP文件本身是无结
 
 我们不能做到:
 
-1. 单个文件不能支持并发写入。
+1. 单个文件并发写入。
 
 
 ## 3 功能说明
diff --git a/ftpwriter/src/main/java/com/alibaba/datax/plugin/writer/ftpwriter/util/SftpHelperImpl.java b/ftpwriter/src/main/java/com/alibaba/datax/plugin/writer/ftpwriter/util/SftpHelperImpl.java
index e6d78629..e748f12c 100644
--- a/ftpwriter/src/main/java/com/alibaba/datax/plugin/writer/ftpwriter/util/SftpHelperImpl.java
+++ b/ftpwriter/src/main/java/com/alibaba/datax/plugin/writer/ftpwriter/util/SftpHelperImpl.java
@@ -14,8 +14,8 @@ import org.slf4j.LoggerFactory;
 
 import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.plugin.writer.ftpwriter.FtpWriterErrorCode;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.serializer.SerializerFeature;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONWriter;
 import com.jcraft.jsch.ChannelSftp;
 import com.jcraft.jsch.JSch;
 import com.jcraft.jsch.JSchException;
@@ -251,7 +251,7 @@ public class SftpHelperImpl implements IFtpHelper {
             @SuppressWarnings("rawtypes")
             Vector allFiles = this.channelSftp.ls(dir);
             LOG.debug(String.format("ls: %s", JSON.toJSONString(allFiles,
-                    SerializerFeature.UseSingleQuotes)));
+                    JSONWriter.Feature.UseSingleQuotes)));
             for (int i = 0; i < allFiles.size(); i++) {
                 LsEntry le = (LsEntry) allFiles.get(i);
                 String strName = le.getFilename();
diff --git a/ftpwriter/src/main/java/com/alibaba/datax/plugin/writer/ftpwriter/util/StandardFtpHelperImpl.java b/ftpwriter/src/main/java/com/alibaba/datax/plugin/writer/ftpwriter/util/StandardFtpHelperImpl.java
index 8999b0a8..d5b9a746 100644
--- a/ftpwriter/src/main/java/com/alibaba/datax/plugin/writer/ftpwriter/util/StandardFtpHelperImpl.java
+++ b/ftpwriter/src/main/java/com/alibaba/datax/plugin/writer/ftpwriter/util/StandardFtpHelperImpl.java
@@ -18,8 +18,8 @@ import org.slf4j.LoggerFactory;
 
 import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.plugin.writer.ftpwriter.FtpWriterErrorCode;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.serializer.SerializerFeature;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONWriter;
 
 public class StandardFtpHelperImpl implements IFtpHelper {
     private static final Logger LOG = LoggerFactory
@@ -244,7 +244,7 @@ public class StandardFtpHelperImpl implements IFtpHelper {
             FTPFile[] fs = this.ftpClient.listFiles(dir);
             // LOG.debug(JSON.toJSONString(this.ftpClient.listNames(dir)));
             LOG.debug(String.format("ls: %s",
-                    JSON.toJSONString(fs, SerializerFeature.UseSingleQuotes)));
+                    JSON.toJSONString(fs, JSONWriter.Feature.UseSingleQuotes)));
             for (FTPFile ff : fs) {
                 String strName = ff.getName();
                 if (strName.startsWith(prefixFileName)) {
diff --git a/gdbwriter/src/main/java/com/alibaba/datax/plugin/writer/gdbwriter/mapping/DefaultGdbMapper.java b/gdbwriter/src/main/java/com/alibaba/datax/plugin/writer/gdbwriter/mapping/DefaultGdbMapper.java
index 73a94cf5..2c015879 100644
--- a/gdbwriter/src/main/java/com/alibaba/datax/plugin/writer/gdbwriter/mapping/DefaultGdbMapper.java
+++ b/gdbwriter/src/main/java/com/alibaba/datax/plugin/writer/gdbwriter/mapping/DefaultGdbMapper.java
@@ -19,8 +19,8 @@ import com.alibaba.datax.plugin.writer.gdbwriter.Key;
 import com.alibaba.datax.plugin.writer.gdbwriter.model.GdbEdge;
 import com.alibaba.datax.plugin.writer.gdbwriter.model.GdbElement;
 import com.alibaba.datax.plugin.writer.gdbwriter.model.GdbVertex;
-import com.alibaba.fastjson.JSONArray;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSONArray;
+import com.alibaba.fastjson2.JSONObject;
 
 import lombok.extern.slf4j.Slf4j;
 
diff --git a/gdbwriter/src/main/java/com/alibaba/datax/plugin/writer/gdbwriter/util/ConfigHelper.java b/gdbwriter/src/main/java/com/alibaba/datax/plugin/writer/gdbwriter/util/ConfigHelper.java
index 178b5e7c..644f8898 100644
--- a/gdbwriter/src/main/java/com/alibaba/datax/plugin/writer/gdbwriter/util/ConfigHelper.java
+++ b/gdbwriter/src/main/java/com/alibaba/datax/plugin/writer/gdbwriter/util/ConfigHelper.java
@@ -12,8 +12,8 @@ import org.apache.commons.lang3.StringUtils;
 import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.plugin.writer.gdbwriter.GdbWriterErrorCode;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONObject;
 
 /**
  * @author jerrywang
diff --git a/hbase094xreader/src/main/java/com/alibaba/datax/plugin/reader/hbase094xreader/Hbase094xHelper.java b/hbase094xreader/src/main/java/com/alibaba/datax/plugin/reader/hbase094xreader/Hbase094xHelper.java
index c3e2a212..b9f16b17 100644
--- a/hbase094xreader/src/main/java/com/alibaba/datax/plugin/reader/hbase094xreader/Hbase094xHelper.java
+++ b/hbase094xreader/src/main/java/com/alibaba/datax/plugin/reader/hbase094xreader/Hbase094xHelper.java
@@ -2,8 +2,8 @@ package com.alibaba.datax.plugin.reader.hbase094xreader;
 
 import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.TypeReference;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.TypeReference;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.commons.lang3.Validate;
 import org.apache.hadoop.fs.Path;
diff --git a/hbase094xwriter/src/main/java/com/alibaba/datax/plugin/writer/hbase094xwriter/Hbase094xHelper.java b/hbase094xwriter/src/main/java/com/alibaba/datax/plugin/writer/hbase094xwriter/Hbase094xHelper.java
index f671d31d..00b128f3 100644
--- a/hbase094xwriter/src/main/java/com/alibaba/datax/plugin/writer/hbase094xwriter/Hbase094xHelper.java
+++ b/hbase094xwriter/src/main/java/com/alibaba/datax/plugin/writer/hbase094xwriter/Hbase094xHelper.java
@@ -2,8 +2,8 @@ package com.alibaba.datax.plugin.writer.hbase094xwriter;
 
 import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.TypeReference;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.TypeReference;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.commons.lang3.Validate;
 import org.apache.hadoop.fs.Path;
diff --git a/hbase11xreader/src/main/java/com/alibaba/datax/plugin/reader/hbase11xreader/Hbase11xHelper.java b/hbase11xreader/src/main/java/com/alibaba/datax/plugin/reader/hbase11xreader/Hbase11xHelper.java
index 643072a9..82ad7122 100644
--- a/hbase11xreader/src/main/java/com/alibaba/datax/plugin/reader/hbase11xreader/Hbase11xHelper.java
+++ b/hbase11xreader/src/main/java/com/alibaba/datax/plugin/reader/hbase11xreader/Hbase11xHelper.java
@@ -2,8 +2,8 @@ package com.alibaba.datax.plugin.reader.hbase11xreader;
 
 import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.TypeReference;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.TypeReference;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.commons.lang3.Validate;
 import org.apache.hadoop.hbase.HBaseConfiguration;
diff --git a/hbase11xsqlreader/src/main/java/com/alibaba/datax/plugin/reader/hbase11xsqlreader/HbaseSQLHelper.java b/hbase11xsqlreader/src/main/java/com/alibaba/datax/plugin/reader/hbase11xsqlreader/HbaseSQLHelper.java
index 5309d1d9..71665a6b 100644
--- a/hbase11xsqlreader/src/main/java/com/alibaba/datax/plugin/reader/hbase11xsqlreader/HbaseSQLHelper.java
+++ b/hbase11xsqlreader/src/main/java/com/alibaba/datax/plugin/reader/hbase11xsqlreader/HbaseSQLHelper.java
@@ -2,8 +2,8 @@ package com.alibaba.datax.plugin.reader.hbase11xsqlreader;
 
 import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.TypeReference;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.TypeReference;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.mapreduce.InputSplit;
diff --git a/hbase11xsqlwriter/src/main/java/com/alibaba/datax/plugin/writer/hbase11xsqlwriter/HbaseSQLHelper.java b/hbase11xsqlwriter/src/main/java/com/alibaba/datax/plugin/writer/hbase11xsqlwriter/HbaseSQLHelper.java
index 41e57d4e..d1b23fdf 100644
--- a/hbase11xsqlwriter/src/main/java/com/alibaba/datax/plugin/writer/hbase11xsqlwriter/HbaseSQLHelper.java
+++ b/hbase11xsqlwriter/src/main/java/com/alibaba/datax/plugin/writer/hbase11xsqlwriter/HbaseSQLHelper.java
@@ -2,8 +2,8 @@ package com.alibaba.datax.plugin.writer.hbase11xsqlwriter;
 
 import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.TypeReference;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.TypeReference;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.util.Pair;
diff --git a/hbase11xwriter/src/main/java/com/alibaba/datax/plugin/writer/hbase11xwriter/Hbase11xHelper.java b/hbase11xwriter/src/main/java/com/alibaba/datax/plugin/writer/hbase11xwriter/Hbase11xHelper.java
index 94b13b60..2889b647 100644
--- a/hbase11xwriter/src/main/java/com/alibaba/datax/plugin/writer/hbase11xwriter/Hbase11xHelper.java
+++ b/hbase11xwriter/src/main/java/com/alibaba/datax/plugin/writer/hbase11xwriter/Hbase11xHelper.java
@@ -2,8 +2,8 @@ package com.alibaba.datax.plugin.writer.hbase11xwriter;
 
 import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.TypeReference;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.TypeReference;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.commons.lang3.Validate;
 import org.apache.hadoop.hbase.HBaseConfiguration;
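The recurring change across the plugin files above is a package migration from fastjson 1.x to fastjson2. The class names used by these plugins (`JSON`, `JSONObject`, `JSONArray`, `TypeReference`) keep the same names under the new top-level package, so for these call sites the swap is import-only. A minimal before/after fragment (illustrative, not a complete file):

```java
// before (fastjson 1.x)
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.TypeReference;

// after (fastjson2) -- same class names, new top-level package
import com.alibaba.fastjson2.JSON;
import com.alibaba.fastjson2.TypeReference;

// Call sites such as JSON.toJSONString(obj) and
// JSON.parseObject(text, new TypeReference<Map<String, String>>() {})
// compile unchanged after the import swap.
```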
diff --git a/hdfsreader/src/main/java/com/alibaba/datax/plugin/reader/hdfsreader/DFSUtil.java b/hdfsreader/src/main/java/com/alibaba/datax/plugin/reader/hdfsreader/DFSUtil.java
index c39d3847..5ba572e1 100644
--- a/hdfsreader/src/main/java/com/alibaba/datax/plugin/reader/hdfsreader/DFSUtil.java
+++ b/hdfsreader/src/main/java/com/alibaba/datax/plugin/reader/hdfsreader/DFSUtil.java
@@ -8,8 +8,8 @@ import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.plugin.unstructuredstorage.reader.ColumnEntry;
 import com.alibaba.datax.plugin.unstructuredstorage.reader.UnstructuredStorageReaderErrorCode;
 import com.alibaba.datax.plugin.unstructuredstorage.reader.UnstructuredStorageReaderUtil;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONObject;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FileStatus;
@@ -331,26 +331,30 @@ public class DFSUtil {
                 //If the network disconnected, will retry 45 times, each time the retry interval for 20 seconds
                 //Each file as a split
                 //TODO multy threads
-                InputSplit[] splits = in.getSplits(conf, 1);
+                // OrcInputFormat#getSplits ignores the numSplits argument; the number of splits equals the file's block count
+                InputSplit[] splits = in.getSplits(conf, -1);
+                for (InputSplit split : splits) {
+                    {
+                        RecordReader reader = in.getRecordReader(split, conf, Reporter.NULL);
+                        Object key = reader.createKey();
+                        Object value = reader.createValue();
+                        // retrieve the column (struct field) metadata
+                        List fields = inspector.getAllStructFieldRefs();
 
-                RecordReader reader = in.getRecordReader(splits[0], conf, Reporter.NULL);
-                Object key = reader.createKey();
-                Object value = reader.createValue();
-                // 获取列信息
-                List fields = inspector.getAllStructFieldRefs();
+                        List recordFields;
+                        while (reader.next(key, value)) {
+                            recordFields = new ArrayList();
 
-                List recordFields;
-                while (reader.next(key, value)) {
-                    recordFields = new ArrayList();
-
-                    for (int i = 0; i <= columnIndexMax; i++) {
-                        Object field = inspector.getStructFieldData(value, fields.get(i));
-                        recordFields.add(field);
+                            for (int i = 0; i <= columnIndexMax; i++) {
+                                Object field = inspector.getStructFieldData(value, fields.get(i));
+                                recordFields.add(field);
+                            }
+                            transportOneRecord(column, recordFields, recordSender,
+                                    taskPluginCollector, isReadAllColumns, nullFormat);
+                        }
+                        reader.close();
                     }
-                    transportOneRecord(column, recordFields, recordSender,
-                            taskPluginCollector, isReadAllColumns, nullFormat);
                 }
-                reader.close();
             } catch (Exception e) {
                 String message = String.format("从orcfile文件路径[%s]中读取数据发生异常,请联系系统管理员。"
                         , sourceOrcFilePath);
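The hunk above fixes a silent data-loss bug: the old code called `in.getRecordReader(splits[0], ...)`, so for an ORC file large enough to produce more than one input split, only the first split's records were ever read. The fix iterates every split. A minimal stdlib sketch of the bug pattern (lists stand in for HDFS splits; names are hypothetical, not DataX APIs):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SplitReadDemo {
    // Pre-fix behavior: only the first split is consumed.
    static List<String> readFirstSplitOnly(List<List<String>> splits) {
        return new ArrayList<>(splits.get(0));
    }

    // Post-fix behavior: every split contributes its records.
    static List<String> readAllSplits(List<List<String>> splits) {
        List<String> records = new ArrayList<>();
        for (List<String> split : splits) {
            records.addAll(split);
        }
        return records;
    }

    public static void main(String[] args) {
        // A file spanning two blocks yields two splits.
        List<List<String>> splits = Arrays.asList(
                Arrays.asList("r1", "r2"),
                Arrays.asList("r3", "r4"));
        System.out.println(readFirstSplitOnly(splits).size()); // 2: half the rows dropped
        System.out.println(readAllSplits(splits).size());      // 4: all rows survive
    }
}
```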
diff --git a/hdfswriter/src/main/java/com/alibaba/datax/plugin/writer/hdfswriter/HdfsHelper.java b/hdfswriter/src/main/java/com/alibaba/datax/plugin/writer/hdfswriter/HdfsHelper.java
index 1ecdb578..a9e157b7 100644
--- a/hdfswriter/src/main/java/com/alibaba/datax/plugin/writer/hdfswriter/HdfsHelper.java
+++ b/hdfswriter/src/main/java/com/alibaba/datax/plugin/writer/hdfswriter/HdfsHelper.java
@@ -8,8 +8,8 @@ import com.alibaba.datax.common.plugin.TaskPluginCollector;
 import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.plugin.unstructuredstorage.util.ColumnTypeUtil;
 import com.alibaba.datax.plugin.unstructuredstorage.util.HdfsUtil;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONObject;
 import com.google.common.collect.Lists;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.commons.lang3.Validate;
diff --git a/hologresjdbcwriter/src/main/java/com/alibaba/datax/plugin/writer/hologresjdbcwriter/BaseWriter.java b/hologresjdbcwriter/src/main/java/com/alibaba/datax/plugin/writer/hologresjdbcwriter/BaseWriter.java
index 2c390bcb..89df08b1 100644
--- a/hologresjdbcwriter/src/main/java/com/alibaba/datax/plugin/writer/hologresjdbcwriter/BaseWriter.java
+++ b/hologresjdbcwriter/src/main/java/com/alibaba/datax/plugin/writer/hologresjdbcwriter/BaseWriter.java
@@ -15,8 +15,8 @@ import com.alibaba.datax.plugin.rdbms.util.DataBaseType;
 import com.alibaba.datax.plugin.writer.hologresjdbcwriter.util.ConfLoader;
 import com.alibaba.datax.plugin.writer.hologresjdbcwriter.util.OriginalConfPretreatmentUtil;
 import com.alibaba.datax.plugin.writer.hologresjdbcwriter.util.WriterUtil;
-import com.alibaba.fastjson.JSONArray;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSONArray;
+import com.alibaba.fastjson2.JSONObject;
 import com.alibaba.hologres.client.HoloClient;
 import com.alibaba.hologres.client.HoloConfig;
 import com.alibaba.hologres.client.Put;
diff --git a/kuduwriter/src/main/java/com/q1/datax/plugin/writer/kudu11xwriter/Kudu11xHelper.java b/kuduwriter/src/main/java/com/q1/datax/plugin/writer/kudu11xwriter/Kudu11xHelper.java
index cf1b0f8f..558693ff 100644
--- a/kuduwriter/src/main/java/com/q1/datax/plugin/writer/kudu11xwriter/Kudu11xHelper.java
+++ b/kuduwriter/src/main/java/com/q1/datax/plugin/writer/kudu11xwriter/Kudu11xHelper.java
@@ -3,7 +3,7 @@ package com.q1.datax.plugin.writer.kudu11xwriter;
 import com.alibaba.datax.common.element.Column;
 import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.commons.lang3.Validate;
 import org.apache.kudu.ColumnSchema;
diff --git a/kuduwriter/src/main/java/com/q1/datax/plugin/writer/kudu11xwriter/KuduWriterTask.java b/kuduwriter/src/main/java/com/q1/datax/plugin/writer/kudu11xwriter/KuduWriterTask.java
index bff3509f..df872842 100644
--- a/kuduwriter/src/main/java/com/q1/datax/plugin/writer/kudu11xwriter/KuduWriterTask.java
+++ b/kuduwriter/src/main/java/com/q1/datax/plugin/writer/kudu11xwriter/KuduWriterTask.java
@@ -134,7 +134,7 @@ public class KuduWriterTask {
                                         break;
                                     case BOOLEAN:
                                         synchronized (lock) {
-                                            row.addBoolean(name, Boolean.getBoolean(rawData));
+                                            row.addBoolean(name, Boolean.parseBoolean(rawData));
                                         }
                                         break;
                                     case STRING:
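The one-line KuduWriterTask change above fixes a classic JDK pitfall: `Boolean.getBoolean(name)` does not parse its argument — it looks up a JVM *system property* with that name, so `Boolean.getBoolean("true")` is almost always `false` and every boolean column would have been written as false. `Boolean.parseBoolean(s)` parses the string itself:

```java
public class BooleanFixDemo {
    public static void main(String[] args) {
        String rawData = "true"; // a cell value arriving as a string

        // Old code: reads the system property named "true" -> false
        // (unless someone happened to set -Dtrue=true on the JVM).
        System.out.println(Boolean.getBoolean(rawData));   // false

        // Fixed code: parses the string contents.
        System.out.println(Boolean.parseBoolean(rawData)); // true
    }
}
```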
diff --git a/loghubreader/src/main/java/com/alibaba/datax/plugin/reader/loghubreader/LogHubReader.java b/loghubreader/src/main/java/com/alibaba/datax/plugin/reader/loghubreader/LogHubReader.java
index f25fbc61..c52ef62d 100644
--- a/loghubreader/src/main/java/com/alibaba/datax/plugin/reader/loghubreader/LogHubReader.java
+++ b/loghubreader/src/main/java/com/alibaba/datax/plugin/reader/loghubreader/LogHubReader.java
@@ -8,7 +8,7 @@ import com.alibaba.datax.common.spi.Reader;
 import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.common.util.DataXCaseEnvUtil;
 import com.alibaba.datax.common.util.RetryUtil;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSONObject;
 import com.aliyun.openservices.log.Client;
 import com.aliyun.openservices.log.common.Consts.CursorMode;
 import com.aliyun.openservices.log.common.*;
diff --git a/mongodbreader/doc/mongodbreader.md b/mongodbreader/doc/mongodbreader.md
index 99d25731..297e598c 100644
--- a/mongodbreader/doc/mongodbreader.md
+++ b/mongodbreader/doc/mongodbreader.md
@@ -114,8 +114,7 @@ MongoDBReader通过Datax框架从MongoDB并行的读取数据,通过主控的J
 	                        "accessKey": "********************",
 	                        "truncate": true,
 	                        "odpsServer": "xxx/api",
-	                        "tunnelServer": "xxx",
-	                        "accountType": "aliyun"
+	                        "tunnelServer": "xxx"
 	                    }
 	                }
 	            }
diff --git a/mongodbreader/src/main/java/com/alibaba/datax/plugin/reader/mongodbreader/MongoDBReader.java b/mongodbreader/src/main/java/com/alibaba/datax/plugin/reader/mongodbreader/MongoDBReader.java
index ba7f07f4..4d129a5a 100644
--- a/mongodbreader/src/main/java/com/alibaba/datax/plugin/reader/mongodbreader/MongoDBReader.java
+++ b/mongodbreader/src/main/java/com/alibaba/datax/plugin/reader/mongodbreader/MongoDBReader.java
@@ -18,9 +18,9 @@ import com.alibaba.datax.common.spi.Reader;
 import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.plugin.reader.mongodbreader.util.CollectionSplitUtil;
 import com.alibaba.datax.plugin.reader.mongodbreader.util.MongoUtil;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONArray;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONArray;
+import com.alibaba.fastjson2.JSONObject;
 
 import com.google.common.base.Joiner;
 import com.google.common.base.Strings;
diff --git a/mongodbwriter/src/main/java/com/alibaba/datax/plugin/writer/mongodbwriter/MongoDBWriter.java b/mongodbwriter/src/main/java/com/alibaba/datax/plugin/writer/mongodbwriter/MongoDBWriter.java
index 66c75078..76f35a40 100644
--- a/mongodbwriter/src/main/java/com/alibaba/datax/plugin/writer/mongodbwriter/MongoDBWriter.java
+++ b/mongodbwriter/src/main/java/com/alibaba/datax/plugin/writer/mongodbwriter/MongoDBWriter.java
@@ -7,9 +7,9 @@ import com.alibaba.datax.common.spi.Writer;
 import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.plugin.rdbms.writer.Key;
 import com.alibaba.datax.plugin.writer.mongodbwriter.util.MongoUtil;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONArray;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONArray;
+import com.alibaba.fastjson2.JSONObject;
 import com.google.common.base.Strings;
 import com.mongodb.*;
 import com.mongodb.client.MongoCollection;
diff --git a/oceanbasev10reader/src/main/java/com/alibaba/datax/plugin/reader/oceanbasev10reader/ext/ReaderJob.java b/oceanbasev10reader/src/main/java/com/alibaba/datax/plugin/reader/oceanbasev10reader/ext/ReaderJob.java
index 642c99fe..2d60d0c6 100644
--- a/oceanbasev10reader/src/main/java/com/alibaba/datax/plugin/reader/oceanbasev10reader/ext/ReaderJob.java
+++ b/oceanbasev10reader/src/main/java/com/alibaba/datax/plugin/reader/oceanbasev10reader/ext/ReaderJob.java
@@ -10,7 +10,7 @@ import com.alibaba.datax.plugin.rdbms.reader.Constant;
 import com.alibaba.datax.plugin.reader.oceanbasev10reader.OceanBaseReader;
 import com.alibaba.datax.plugin.reader.oceanbasev10reader.util.ObReaderUtils;
 import com.alibaba.datax.plugin.reader.oceanbasev10reader.util.PartitionSplitUtil;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSONObject;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
diff --git a/oceanbasev10writer/src/main/java/com/alibaba/datax/plugin/writer/oceanbasev10writer/OceanBaseV10Writer.java b/oceanbasev10writer/src/main/java/com/alibaba/datax/plugin/writer/oceanbasev10writer/OceanBaseV10Writer.java
index 62656843..3bcc1019 100644
--- a/oceanbasev10writer/src/main/java/com/alibaba/datax/plugin/writer/oceanbasev10writer/OceanBaseV10Writer.java
+++ b/oceanbasev10writer/src/main/java/com/alibaba/datax/plugin/writer/oceanbasev10writer/OceanBaseV10Writer.java
@@ -12,7 +12,7 @@ import com.alibaba.datax.plugin.rdbms.writer.util.WriterUtil;
 import com.alibaba.datax.plugin.writer.oceanbasev10writer.task.ConcurrentTableWriterTask;
 import com.alibaba.datax.plugin.writer.oceanbasev10writer.util.DbUtils;
 import com.alibaba.datax.plugin.writer.oceanbasev10writer.util.ObWriterUtils;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSONObject;
 import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
diff --git a/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/Constant.java b/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/Constant.java
index dee2ef5c..cf34762d 100755
--- a/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/Constant.java
+++ b/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/Constant.java
@@ -14,20 +14,9 @@ public class Constant {
 
     public static final String PARTITION_SPLIT_MODE = "partition";
 
-    public static final String DEFAULT_ACCOUNT_TYPE = "aliyun";
-
-    public static final String TAOBAO_ACCOUNT_TYPE = "taobao";
-
     // 常量字段用COLUMN_CONSTANT_FLAG 首尾包住即可
     public final static String COLUMN_CONSTANT_FLAG = "'";
 
-    /**
-     * 以下是获取accesskey id 需要用到的常量值
-     */
-    public static final String SKYNET_ACCESSID = "SKYNET_ACCESSID";
-
-    public static final String SKYNET_ACCESSKEY = "SKYNET_ACCESSKEY";
-    
     public static final String PARTITION_COLUMNS = "partitionColumns";
     
     public static final String PARSED_COLUMNS = "parsedColumns";
diff --git a/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/Key.java b/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/Key.java
index 2cee65d1..6f8c7d92 100755
--- a/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/Key.java
+++ b/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/Key.java
@@ -24,9 +24,6 @@ public class Key {
     // 当值为:partition 则只切分到分区;当值为:record,则当按照分区切分后达不到adviceNum时,继续按照record切分
     public final static String SPLIT_MODE = "splitMode";
 
-    // 账号类型,默认为aliyun,也可能为taobao等其他类型
-    public final static String ACCOUNT_TYPE = "accountType";
-
     public final static String PACKAGE_AUTHORIZED_PROJECT = "packageAuthorizedProject";
 
     public final static String IS_COMPRESS = "isCompress";
diff --git a/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/OdpsReader.java b/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/OdpsReader.java
index d7ea6b1c..615cee50 100755
--- a/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/OdpsReader.java
+++ b/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/OdpsReader.java
@@ -7,7 +7,7 @@ import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.common.util.FilterUtil;
 import com.alibaba.datax.common.util.MessageSource;
 import com.alibaba.datax.plugin.reader.odpsreader.util.*;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import com.aliyun.odps.Column;
 import com.aliyun.odps.Odps;
 import com.aliyun.odps.Table;
@@ -42,12 +42,6 @@ public class OdpsReader extends Reader {
             this.originalConfig = super.getPluginJobConf();
             this.successOnNoPartition = this.originalConfig.getBool(Key.SUCCESS_ON_NO_PATITION, false);
 
-            //如果用户没有配置accessId/accessKey,尝试从环境变量获取
-            String accountType = originalConfig.getString(Key.ACCOUNT_TYPE, Constant.DEFAULT_ACCOUNT_TYPE);
-            if (Constant.DEFAULT_ACCOUNT_TYPE.equalsIgnoreCase(accountType)) {
-                this.originalConfig = IdAndKeyUtil.parseAccessIdAndKey(this.originalConfig);
-            }
-
             //检查必要的参数配置
             OdpsUtil.checkNecessaryConfig(this.originalConfig);
             //重试次数的配置检查
diff --git a/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/ReaderProxy.java b/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/ReaderProxy.java
index 1d56d191..c2e88eba 100755
--- a/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/ReaderProxy.java
+++ b/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/ReaderProxy.java
@@ -6,7 +6,7 @@ import com.alibaba.datax.common.plugin.RecordSender;
 import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.common.util.MessageSource;
 import com.alibaba.datax.plugin.reader.odpsreader.util.OdpsUtil;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import com.aliyun.odps.Column;
 import com.aliyun.odps.OdpsType;
 import com.aliyun.odps.data.*;
@@ -200,7 +200,7 @@ public class ReaderProxy {
         }
         if (IS_DEBUG) {
             LOG.debug(String.format("partition value details: %s",
-                    com.alibaba.fastjson.JSON.toJSONString(partitionMap)));
+                    com.alibaba.fastjson2.JSON.toJSONString(partitionMap)));
         }
         return partitionMap;
     }
@@ -212,7 +212,7 @@ public class ReaderProxy {
         // it's will never happen, but add this checking
         if (!partitionMap.containsKey(partitionColumnName)) {
             String errorMessage = MESSAGE_SOURCE.message("readerproxy.3",
-                    com.alibaba.fastjson.JSON.toJSONString(partitionMap),
+                    com.alibaba.fastjson2.JSON.toJSONString(partitionMap),
                     partitionColumnName);
             throw DataXException.asDataXException(
                     OdpsReaderErrorCode.READ_DATA_FAIL, errorMessage);
diff --git a/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/util/IdAndKeyUtil.java b/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/util/IdAndKeyUtil.java
deleted file mode 100644
index 05722b59..00000000
--- a/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/util/IdAndKeyUtil.java
+++ /dev/null
@@ -1,65 +0,0 @@
-/**
- *  (C) 2010-2022 Alibaba Group Holding Limited.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package com.alibaba.datax.plugin.reader.odpsreader.util;
-
-import com.alibaba.datax.common.exception.DataXException;
-import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.datax.common.util.IdAndKeyRollingUtil;
-import com.alibaba.datax.common.util.MessageSource;
-import com.alibaba.datax.plugin.reader.odpsreader.Key;
-import com.alibaba.datax.plugin.reader.odpsreader.OdpsReaderErrorCode;
-
-import org.apache.commons.lang3.StringUtils;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.util.Map;
-
-public class IdAndKeyUtil {
-    private static Logger LOG = LoggerFactory.getLogger(IdAndKeyUtil.class);
-    private static MessageSource MESSAGE_SOURCE = MessageSource.loadResourceBundle(IdAndKeyUtil.class);
-
-    public static Configuration parseAccessIdAndKey(Configuration originalConfig) {
-        String accessId = originalConfig.getString(Key.ACCESS_ID);
-        String accessKey = originalConfig.getString(Key.ACCESS_KEY);
-
-        // 只要 accessId,accessKey 二者配置了一个,就理解为是用户本意是要直接手动配置其 accessid/accessKey
-        if (StringUtils.isNotBlank(accessId) || StringUtils.isNotBlank(accessKey)) {
-            LOG.info("Try to get accessId/accessKey from your config.");
-            //通过如下语句,进行检查是否确实配置了
-            accessId = originalConfig.getNecessaryValue(Key.ACCESS_ID, OdpsReaderErrorCode.REQUIRED_VALUE);
-            accessKey = originalConfig.getNecessaryValue(Key.ACCESS_KEY, OdpsReaderErrorCode.REQUIRED_VALUE);
-            //检查完毕,返回即可
-            return originalConfig;
-        } else {
-            Map envProp = System.getenv();
-            return getAccessIdAndKeyFromEnv(originalConfig, envProp);
-        }
-    }
-
-    private static Configuration getAccessIdAndKeyFromEnv(Configuration originalConfig,
-                                                          Map envProp) {
-    	// 如果获取到ak,在getAccessIdAndKeyFromEnv中已经设置到originalConfig了
-        String accessKey = IdAndKeyRollingUtil.getAccessIdAndKeyFromEnv(originalConfig);
-        if (StringUtils.isBlank(accessKey)) {
-            // 无处获取(既没有配置在作业中,也没用在环境变量中)
-            throw DataXException.asDataXException(OdpsReaderErrorCode.GET_ID_KEY_FAIL,
-                    MESSAGE_SOURCE.message("idandkeyutil.2"));
-        }
-        return originalConfig;
-    }
-}
diff --git a/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/util/OdpsUtil.java b/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/util/OdpsUtil.java
index f2ad8e0f..0ff34a81 100755
--- a/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/util/OdpsUtil.java
+++ b/odpsreader/src/main/java/com/alibaba/datax/plugin/reader/odpsreader/util/OdpsUtil.java
@@ -76,19 +76,12 @@ public final class OdpsUtil {
             defaultProject = packageAuthorizedProject;
         }
 
-        String accountType = originalConfig.getString(Key.ACCOUNT_TYPE,
-                Constant.DEFAULT_ACCOUNT_TYPE);
 
         Account account = null;
-        if (accountType.equalsIgnoreCase(Constant.DEFAULT_ACCOUNT_TYPE)) {
-            if (StringUtils.isNotBlank(securityToken)) {
-                account = new StsAccount(accessId, accessKey, securityToken);
-            } else {
-                account = new AliyunAccount(accessId, accessKey);
-            }
+        if (StringUtils.isNotBlank(securityToken)) {
+            account = new StsAccount(accessId, accessKey, securityToken);
         } else {
-            throw DataXException.asDataXException(OdpsReaderErrorCode.ACCOUNT_TYPE_ERROR,
-                    MESSAGE_SOURCE.message("odpsutil.3", accountType));
+            account = new AliyunAccount(accessId, accessKey);
         }
 
         Odps odps = new Odps(account);
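With the `accountType` branch removed, the only remaining decision in `OdpsUtil` is whether an STS `securityToken` was supplied: if so, build an `StsAccount`, otherwise an `AliyunAccount`. A minimal stdlib sketch of that selection (the account class names come from the diff; the plain null/blank check stands in for `StringUtils.isNotBlank`):

```java
public class AccountSelectDemo {
    // Mirrors the simplified post-patch logic: token present -> STS,
    // otherwise a plain AccessId/AccessKey account.
    static String chooseAccount(String securityToken) {
        boolean hasToken = securityToken != null && !securityToken.trim().isEmpty();
        return hasToken ? "StsAccount" : "AliyunAccount";
    }

    public static void main(String[] args) {
        System.out.println(chooseAccount("sts-token")); // StsAccount
        System.out.println(chooseAccount(null));        // AliyunAccount
        System.out.println(chooseAccount("  "));        // AliyunAccount
    }
}
```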
diff --git a/odpswriter/doc/odpswriter.md b/odpswriter/doc/odpswriter.md
index d81672b0..845dd1d3 100644
--- a/odpswriter/doc/odpswriter.md
+++ b/odpswriter/doc/odpswriter.md
@@ -71,8 +71,7 @@ ODPSWriter插件用于实现往ODPS插入或者更新数据,主要提供给etl
             "accessKey": "xxxx",
             "truncate": true,
             "odpsServer": "http://sxxx/api",
-            "tunnelServer": "http://xxx",
-            "accountType": "aliyun"
+            "tunnelServer": "http://xxx"
           }
         }
       }
diff --git a/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/Constant.java b/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/Constant.java
index f4d9734b..efedfea9 100755
--- a/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/Constant.java
+++ b/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/Constant.java
@@ -2,13 +2,6 @@ package com.alibaba.datax.plugin.writer.odpswriter;
 
 
 public class Constant {
-    public static final String SKYNET_ACCESSID = "SKYNET_ACCESSID";
-
-    public static final String SKYNET_ACCESSKEY = "SKYNET_ACCESSKEY";
-
-    public static final String DEFAULT_ACCOUNT_TYPE = "aliyun";
-
-    public static final String TAOBAO_ACCOUNT_TYPE = "taobao";
 
     public static final String COLUMN_POSITION = "columnPosition";
 
diff --git a/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/Key.java b/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/Key.java
index 7ee11128..8dff8a4c 100755
--- a/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/Key.java
+++ b/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/Key.java
@@ -30,8 +30,6 @@ public final class Key {
     //boolean 类型,default:false
     public final static String EMPTY_AS_NULL = "emptyAsNull";
 
-    public final static String ACCOUNT_TYPE = "accountType";
-
     public final static String IS_COMPRESS = "isCompress";
 
     // preSql
diff --git a/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/OdpsWriter.java b/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/OdpsWriter.java
index c82fcef4..9b7276fa 100755
--- a/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/OdpsWriter.java
+++ b/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/OdpsWriter.java
@@ -12,9 +12,9 @@ import com.alibaba.datax.common.util.MessageSource;
 import com.alibaba.datax.plugin.writer.odpswriter.model.PartitionInfo;
 import com.alibaba.datax.plugin.writer.odpswriter.model.UserDefinedFunction;
 import com.alibaba.datax.plugin.writer.odpswriter.util.*;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONArray;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONArray;
+import com.alibaba.fastjson2.JSONObject;
 import com.aliyun.odps.Odps;
 import com.aliyun.odps.Table;
 import com.aliyun.odps.TableSchema;
@@ -62,7 +62,6 @@ public class OdpsWriter extends Writer {
         private String tableName;
         private String tunnelServer;
         private String partition;
-        private String accountType;
         private boolean truncate;
         private String uploadId;
         private TableTunnel.UploadSession masterUpload;
@@ -104,8 +103,6 @@ public class OdpsWriter extends Writer {
             this.tableName = this.originalConfig.getString(Key.TABLE);
             this.tunnelServer = this.originalConfig.getString(Key.TUNNEL_SERVER, null);
 
-            this.dealAK();
-
             // init odps config
             this.odps = OdpsUtil.initOdpsProject(this.originalConfig);
 
@@ -153,31 +150,6 @@ public class OdpsWriter extends Writer {
             }
         }
 
-        private void dealAK() {
-            this.accountType = this.originalConfig.getString(Key.ACCOUNT_TYPE,
-                Constant.DEFAULT_ACCOUNT_TYPE);
-
-            if (!Constant.DEFAULT_ACCOUNT_TYPE.equalsIgnoreCase(this.accountType) &&
-                    !Constant.TAOBAO_ACCOUNT_TYPE.equalsIgnoreCase(this.accountType)) {
-                throw DataXException.asDataXException(OdpsWriterErrorCode.ACCOUNT_TYPE_ERROR,
-                        MESSAGE_SOURCE.message("odpswriter.1", accountType));
-            }
-            this.originalConfig.set(Key.ACCOUNT_TYPE, this.accountType);
-
-            //检查accessId,accessKey配置
-            if (Constant.DEFAULT_ACCOUNT_TYPE
-                    .equalsIgnoreCase(this.accountType)) {
-                this.originalConfig = IdAndKeyUtil.parseAccessIdAndKey(this.originalConfig);
-                String accessId = this.originalConfig.getString(Key.ACCESS_ID);
-                String accessKey = this.originalConfig.getString(Key.ACCESS_KEY);
-                if (IS_DEBUG) {
-                    LOG.debug("accessId:[{}], accessKey:[{}] .", accessId,
-                            accessKey);
-                }
-                LOG.info("accessId:[{}] .", accessId);
-            }
-        }
-
         private void dealDynamicPartition() {
             /*
              * 如果显示配置了 supportDynamicPartition,则以配置为准
@@ -241,20 +213,6 @@ public class OdpsWriter extends Writer {
 
         @Override
         public void prepare() {
-            String accessId = null;
-            String accessKey = null;
-            if (Constant.DEFAULT_ACCOUNT_TYPE
-                    .equalsIgnoreCase(this.accountType)) {
-                this.originalConfig = IdAndKeyUtil.parseAccessIdAndKey(this.originalConfig);
-                accessId = this.originalConfig.getString(Key.ACCESS_ID);
-                accessKey = this.originalConfig.getString(Key.ACCESS_KEY);
-                if (IS_DEBUG) {
-                    LOG.debug("accessId:[{}], accessKey:[{}] .", accessId,
-                            accessKey);
-                }
-                LOG.info("accessId:[{}] .", accessId);
-            }
-
             // init odps config
             this.odps = OdpsUtil.initOdpsProject(this.originalConfig);
 
diff --git a/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/OdpsWriterProxy.java b/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/OdpsWriterProxy.java
index 221aca79..e7c95be1 100755
--- a/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/OdpsWriterProxy.java
+++ b/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/OdpsWriterProxy.java
@@ -6,9 +6,9 @@ import com.alibaba.datax.common.plugin.TaskPluginCollector;
 import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.common.util.MessageSource;
 import com.alibaba.datax.plugin.writer.odpswriter.util.OdpsUtil;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONArray;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONArray;
+import com.alibaba.fastjson2.JSONObject;
 import com.aliyun.odps.OdpsType;
 import com.aliyun.odps.TableSchema;
 import com.aliyun.odps.data.ArrayRecord;
diff --git a/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/util/CustomPartitionUtils.java b/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/util/CustomPartitionUtils.java
index 51ad45a1..6153a820 100644
--- a/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/util/CustomPartitionUtils.java
+++ b/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/util/CustomPartitionUtils.java
@@ -4,7 +4,7 @@ import com.alibaba.datax.common.element.Record;
 import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.plugin.writer.odpswriter.model.PartitionInfo;
 import com.alibaba.datax.plugin.writer.odpswriter.model.UserDefinedFunction;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import com.google.common.base.Joiner;
 import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
diff --git a/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/util/IdAndKeyUtil.java b/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/util/IdAndKeyUtil.java
deleted file mode 100755
index 98c9afdd..00000000
--- a/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/util/IdAndKeyUtil.java
+++ /dev/null
@@ -1,65 +0,0 @@
-/**
- *  (C) 2010-2022 Alibaba Group Holding Limited.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package com.alibaba.datax.plugin.writer.odpswriter.util;
-
-import com.alibaba.datax.common.exception.DataXException;
-import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.datax.common.util.IdAndKeyRollingUtil;
-import com.alibaba.datax.common.util.MessageSource;
-import com.alibaba.datax.plugin.writer.odpswriter.Key;
-import com.alibaba.datax.plugin.writer.odpswriter.OdpsWriterErrorCode;
-
-import org.apache.commons.lang3.StringUtils;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import java.util.Map;
-
-public class IdAndKeyUtil {
-    private static Logger LOG = LoggerFactory.getLogger(IdAndKeyUtil.class);
-    private static final MessageSource MESSAGE_SOURCE = MessageSource.loadResourceBundle(IdAndKeyUtil.class);
-
-    public static Configuration parseAccessIdAndKey(Configuration originalConfig) {
-        String accessId = originalConfig.getString(Key.ACCESS_ID);
-        String accessKey = originalConfig.getString(Key.ACCESS_KEY);
-
-        // 只要 accessId,accessKey 二者配置了一个,就理解为是用户本意是要直接手动配置其 accessid/accessKey
-        if (StringUtils.isNotBlank(accessId) || StringUtils.isNotBlank(accessKey)) {
-            LOG.info("Try to get accessId/accessKey from your config.");
-            //通过如下语句,进行检查是否确实配置了
-            accessId = originalConfig.getNecessaryValue(Key.ACCESS_ID, OdpsWriterErrorCode.REQUIRED_VALUE);
-            accessKey = originalConfig.getNecessaryValue(Key.ACCESS_KEY, OdpsWriterErrorCode.REQUIRED_VALUE);
-            //检查完毕,返回即可
-            return originalConfig;
-        } else {
-            Map envProp = System.getenv();
-            return getAccessIdAndKeyFromEnv(originalConfig, envProp);
-        }
-    }
-
-    private static Configuration getAccessIdAndKeyFromEnv(Configuration originalConfig,
-                                                          Map envProp) {
-    	// 如果获取到ak,在getAccessIdAndKeyFromEnv中已经设置到originalConfig了
-    	String accessKey = IdAndKeyRollingUtil.getAccessIdAndKeyFromEnv(originalConfig);
-    	if (StringUtils.isBlank(accessKey)) {
-    		// 无处获取(既没有配置在作业中,也没用在环境变量中)
-            throw DataXException.asDataXException(OdpsWriterErrorCode.GET_ID_KEY_FAIL,
-                    MESSAGE_SOURCE.message("idandkeyutil.2"));
-    	}
-        return originalConfig;
-    }
-}
diff --git a/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/util/OdpsUtil.java b/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/util/OdpsUtil.java
index a663da85..a3a372af 100755
--- a/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/util/OdpsUtil.java
+++ b/odpswriter/src/main/java/com/alibaba/datax/plugin/writer/odpswriter/util/OdpsUtil.java
@@ -79,7 +79,6 @@ public class OdpsUtil {
 
 
     public static Odps initOdpsProject(Configuration originalConfig) {
-        String accountType = originalConfig.getString(Key.ACCOUNT_TYPE);
         String accessId = originalConfig.getString(Key.ACCESS_ID);
         String accessKey = originalConfig.getString(Key.ACCESS_KEY);
 
@@ -88,15 +87,10 @@ public class OdpsUtil {
         String securityToken = originalConfig.getString(Key.SECURITY_TOKEN);
 
         Account account;
-        if (accountType.equalsIgnoreCase(Constant.DEFAULT_ACCOUNT_TYPE)) {
-            if (StringUtils.isNotBlank(securityToken)) {
-                account = new com.aliyun.odps.account.StsAccount(accessId, accessKey, securityToken);
-            } else {
-                account = new AliyunAccount(accessId, accessKey);
-            }
+        if (StringUtils.isNotBlank(securityToken)) {
+            account = new com.aliyun.odps.account.StsAccount(accessId, accessKey, securityToken);
         } else {
-            throw DataXException.asDataXException(OdpsWriterErrorCode.ACCOUNT_TYPE_ERROR,
-                    MESSAGE_SOURCE.message("odpsutil.4", accountType));
+            account = new AliyunAccount(accessId, accessKey);
         }
 
         Odps odps = new Odps(account);
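The hunk above removes the `accountType` branch: the writer now always builds an Aliyun-style account, choosing STS temporary credentials only when a `securityToken` is configured. A minimal sketch of the simplified selection rule (class and method names here are illustrative, not from the DataX codebase):

```java
public class AccountSelectionSketch {
    // Mirrors the simplified logic in OdpsUtil.initOdpsProject:
    // STS when a session token is present, plain accessId/accessKey otherwise.
    static String chooseAccountType(String securityToken) {
        if (securityToken != null && !securityToken.trim().isEmpty()) {
            return "StsAccount";     // temporary credentials with a session token
        }
        return "AliyunAccount";      // long-lived accessId/accessKey pair
    }

    public static void main(String[] args) {
        System.out.println(chooseAccountType(null));     // AliyunAccount
        System.out.println(chooseAccountType("token"));  // StsAccount
    }
}
```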
diff --git a/opentsdbreader/pom.xml b/opentsdbreader/pom.xml
index 83d7c424..b10fba02 100644
--- a/opentsdbreader/pom.xml
+++ b/opentsdbreader/pom.xml
@@ -44,10 +44,6 @@
                     <artifactId>slf4j-log4j12</artifactId>
                     <groupId>org.slf4j</groupId>
                 </exclusion>
-                <exclusion>
-                    <artifactId>fastjson</artifactId>
-                    <groupId>com.alibaba</groupId>
-                </exclusion>
                 <exclusion>
                     <artifactId>commons-math3</artifactId>
                     <groupId>org.apache.commons</groupId>
@@ -89,8 +85,8 @@
 
         <!-- fastjson -->
         <dependency>
-            <groupId>com.alibaba</groupId>
-            <artifactId>fastjson</artifactId>
+            <groupId>com.alibaba.fastjson2</groupId>
+            <artifactId>fastjson2</artifactId>
         </dependency>
 
         <dependency>
diff --git a/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/DataPoint4TSDB.java b/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/DataPoint4TSDB.java
index 64c124ae..e8a84fb2 100644
--- a/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/DataPoint4TSDB.java
+++ b/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/DataPoint4TSDB.java
@@ -1,6 +1,6 @@
 package com.alibaba.datax.plugin.reader.conn;
 
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 
 import java.util.Map;
 
diff --git a/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/OpenTSDBConnection.java b/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/OpenTSDBConnection.java
index 939a856f..49ba5fb3 100644
--- a/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/OpenTSDBConnection.java
+++ b/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/OpenTSDBConnection.java
@@ -2,7 +2,7 @@ package com.alibaba.datax.plugin.reader.conn;
 
 import com.alibaba.datax.common.plugin.RecordSender;
 import com.alibaba.datax.plugin.reader.util.TSDBUtils;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.commons.lang3.StringUtils;
 
 import java.util.List;
diff --git a/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/OpenTSDBDump.java b/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/OpenTSDBDump.java
index 009aa100..6f3c551a 100644
--- a/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/OpenTSDBDump.java
+++ b/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/conn/OpenTSDBDump.java
@@ -1,7 +1,7 @@
 package com.alibaba.datax.plugin.reader.conn;
 
 import com.alibaba.datax.common.plugin.RecordSender;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import net.opentsdb.core.TSDB;
 import net.opentsdb.utils.Config;
 
diff --git a/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/opentsdbreader/OpenTSDBReader.java b/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/opentsdbreader/OpenTSDBReader.java
index 4cd0476e..7790a2b1 100755
--- a/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/opentsdbreader/OpenTSDBReader.java
+++ b/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/opentsdbreader/OpenTSDBReader.java
@@ -6,7 +6,7 @@ import com.alibaba.datax.common.spi.Reader;
 import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.plugin.reader.conn.OpenTSDBConnection;
 import com.alibaba.datax.plugin.reader.util.TimeUtils;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.commons.lang3.StringUtils;
 import org.joda.time.DateTime;
 import org.slf4j.Logger;
diff --git a/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/util/HttpUtils.java b/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/util/HttpUtils.java
index cbd0d7ca..fa82b634 100644
--- a/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/util/HttpUtils.java
+++ b/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/util/HttpUtils.java
@@ -1,6 +1,6 @@
 package com.alibaba.datax.plugin.reader.util;
 
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.http.client.fluent.Content;
 import org.apache.http.client.fluent.Request;
 import org.apache.http.entity.ContentType;
diff --git a/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/util/TSDBUtils.java b/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/util/TSDBUtils.java
index bbfb75cb..9f1e38d5 100644
--- a/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/util/TSDBUtils.java
+++ b/opentsdbreader/src/main/java/com/alibaba/datax/plugin/reader/util/TSDBUtils.java
@@ -1,7 +1,7 @@
 package com.alibaba.datax.plugin.reader.util;
 
 import com.alibaba.datax.plugin.reader.conn.DataPoint4TSDB;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
diff --git a/ossreader/src/main/java/com/alibaba/datax/plugin/reader/ossreader/OssReader.java b/ossreader/src/main/java/com/alibaba/datax/plugin/reader/ossreader/OssReader.java
index 62a1f81f..9b76c53e 100755
--- a/ossreader/src/main/java/com/alibaba/datax/plugin/reader/ossreader/OssReader.java
+++ b/ossreader/src/main/java/com/alibaba/datax/plugin/reader/ossreader/OssReader.java
@@ -12,8 +12,8 @@ import com.alibaba.datax.plugin.unstructuredstorage.FileFormat;
 import com.alibaba.datax.plugin.unstructuredstorage.reader.UnstructuredStorageReaderUtil;
 import com.alibaba.datax.plugin.unstructuredstorage.reader.binaryFileUtil.BinaryFileReaderUtil;
 import com.alibaba.datax.plugin.unstructuredstorage.reader.split.StartEndPair;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.TypeReference;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.TypeReference;
 import com.aliyun.oss.ClientException;
 import com.aliyun.oss.OSSClient;
 import com.aliyun.oss.OSSException;
diff --git a/ossreader/src/main/java/com/alibaba/datax/plugin/reader/ossreader/util/HdfsParquetUtil.java b/ossreader/src/main/java/com/alibaba/datax/plugin/reader/ossreader/util/HdfsParquetUtil.java
index f332bb95..3012c84a 100644
--- a/ossreader/src/main/java/com/alibaba/datax/plugin/reader/ossreader/util/HdfsParquetUtil.java
+++ b/ossreader/src/main/java/com/alibaba/datax/plugin/reader/ossreader/util/HdfsParquetUtil.java
@@ -2,8 +2,8 @@ package com.alibaba.datax.plugin.reader.ossreader.util;
 
 import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.plugin.reader.ossreader.Key;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONObject;
 
 /**
  * @Author: guxuan
diff --git a/ossreader/src/main/java/com/alibaba/datax/plugin/reader/ossreader/util/OssSplitUtil.java b/ossreader/src/main/java/com/alibaba/datax/plugin/reader/ossreader/util/OssSplitUtil.java
index 760d8d5f..6ba80999 100644
--- a/ossreader/src/main/java/com/alibaba/datax/plugin/reader/ossreader/util/OssSplitUtil.java
+++ b/ossreader/src/main/java/com/alibaba/datax/plugin/reader/ossreader/util/OssSplitUtil.java
@@ -7,8 +7,8 @@ import com.alibaba.datax.plugin.unstructuredstorage.reader.Key;
 import com.alibaba.datax.plugin.unstructuredstorage.reader.UnstructuredStorageReaderErrorCode;
 import com.alibaba.datax.plugin.unstructuredstorage.reader.split.StartEndPair;
 import com.alibaba.datax.plugin.unstructuredstorage.reader.split.UnstructuredSplitUtil;
-import com.alibaba.fastjson.JSONArray;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSONArray;
+import com.alibaba.fastjson2.JSONObject;
 import com.aliyun.oss.OSSClient;
 import com.aliyun.oss.model.GetObjectRequest;
 import com.aliyun.oss.model.OSSObject;
diff --git a/osswriter/src/main/java/com/alibaba/datax/plugin/writer/osswriter/OssWriter.java b/osswriter/src/main/java/com/alibaba/datax/plugin/writer/osswriter/OssWriter.java
index a8aec0e6..f96a8e01 100644
--- a/osswriter/src/main/java/com/alibaba/datax/plugin/writer/osswriter/OssWriter.java
+++ b/osswriter/src/main/java/com/alibaba/datax/plugin/writer/osswriter/OssWriter.java
@@ -14,7 +14,7 @@ import com.alibaba.datax.plugin.unstructuredstorage.writer.binaryFileUtil.Binary
 import com.alibaba.datax.plugin.writer.hdfswriter.HdfsWriter;
 import com.alibaba.datax.plugin.writer.osswriter.util.HandlerUtil;
 import com.alibaba.datax.plugin.writer.osswriter.util.HdfsParquetUtil;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import com.aliyun.oss.model.*;
 import org.apache.commons.io.IOUtils;
 import org.apache.commons.lang3.StringUtils;
diff --git a/osswriter/src/main/java/com/alibaba/datax/plugin/writer/osswriter/parquet/ParquetFileSupport.java b/osswriter/src/main/java/com/alibaba/datax/plugin/writer/osswriter/parquet/ParquetFileSupport.java
index 9daa5a7f..c3ff777c 100644
--- a/osswriter/src/main/java/com/alibaba/datax/plugin/writer/osswriter/parquet/ParquetFileSupport.java
+++ b/osswriter/src/main/java/com/alibaba/datax/plugin/writer/osswriter/parquet/ParquetFileSupport.java
@@ -5,9 +5,9 @@ import com.alibaba.datax.common.element.Record;
 import com.alibaba.datax.common.plugin.TaskPluginCollector;
 import com.alibaba.datax.plugin.unstructuredstorage.writer.Key;
 import com.alibaba.datax.plugin.writer.osswriter.Constant;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONArray;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONArray;
+import com.alibaba.fastjson2.JSONObject;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.slf4j.Logger;
diff --git a/osswriter/src/main/java/com/alibaba/datax/plugin/writer/osswriter/util/HdfsParquetUtil.java b/osswriter/src/main/java/com/alibaba/datax/plugin/writer/osswriter/util/HdfsParquetUtil.java
index ccd3aa35..dc102dac 100644
--- a/osswriter/src/main/java/com/alibaba/datax/plugin/writer/osswriter/util/HdfsParquetUtil.java
+++ b/osswriter/src/main/java/com/alibaba/datax/plugin/writer/osswriter/util/HdfsParquetUtil.java
@@ -3,8 +3,8 @@ package com.alibaba.datax.plugin.writer.osswriter.util;
 import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.plugin.writer.hdfswriter.HdfsWriter;
 import com.alibaba.datax.plugin.writer.osswriter.Key;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONObject;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.commons.lang3.Validate;
 import org.apache.hadoop.fs.FileSystem;
diff --git a/package.xml b/package.xml
index 96b52c30..4c1aff04 100755
--- a/package.xml
+++ b/package.xml
@@ -434,6 +434,13 @@
             </includes>
             <outputDirectory>datax</outputDirectory>
         </fileSet>
+        <fileSet>
+            <directory>databendwriter/target/datax/</directory>
+            <includes>
+                <include>**/*.*</include>
+            </includes>
+            <outputDirectory>datax</outputDirectory>
+        </fileSet>
         <fileSet>
             <directory>oscarwriter/target/datax/</directory>
             <includes>
@@ -483,5 +490,12 @@
             </includes>
             <outputDirectory>datax</outputDirectory>
         </fileSet>
+        <fileSet>
+            <directory>selectdbwriter/target/datax/</directory>
+            <includes>
+                <include>**/*.*</include>
+            </includes>
+            <outputDirectory>datax</outputDirectory>
+        </fileSet>
     </fileSets>
 </assembly>
diff --git a/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/reader/util/OriginalConfPretreatmentUtil.java b/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/reader/util/OriginalConfPretreatmentUtil.java
index 3ac5f2af..ef3a876d 100755
--- a/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/reader/util/OriginalConfPretreatmentUtil.java
+++ b/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/reader/util/OriginalConfPretreatmentUtil.java
@@ -261,7 +261,7 @@ public final class OriginalConfPretreatmentUtil {
 
         // 混合配制 table 和 querySql
         if (!ListUtil.checkIfValueSame(tableModeFlags)
-                || !ListUtil.checkIfValueSame(tableModeFlags)) {
+                || !ListUtil.checkIfValueSame(querySqlModeFlags)) {
             throw DataXException.asDataXException(DBUtilErrorCode.TABLE_QUERYSQL_MIXED,
                     "您配置凌乱了. 不能同时既配置table又配置querySql. 请检查您的配置并作出修改.");
         }
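This hunk fixes a copy-paste bug: the old condition tested `tableModeFlags` twice, so a job mixing `table` and `querySql` across connections could slip past the check. The sketch below reimplements `ListUtil.checkIfValueSame` with its assumed semantics (true when all flags agree) to show the difference:

```java
import java.util.Arrays;
import java.util.List;

public class MixedConfigCheckSketch {
    // Assumed semantics of ListUtil.checkIfValueSame: all entries identical.
    static boolean checkIfValueSame(List<Boolean> flags) {
        return flags.stream().distinct().count() <= 1;
    }

    public static void main(String[] args) {
        List<Boolean> tableModeFlags = Arrays.asList(true, true);
        List<Boolean> querySqlModeFlags = Arrays.asList(false, true); // mixed config

        // Old code tested tableModeFlags twice, so this mix went undetected:
        boolean oldCheck = !checkIfValueSame(tableModeFlags) || !checkIfValueSame(tableModeFlags);
        // Fixed code tests both flag lists:
        boolean newCheck = !checkIfValueSame(tableModeFlags) || !checkIfValueSame(querySqlModeFlags);

        System.out.println(oldCheck); // false: error not raised
        System.out.println(newCheck); // true: mixed table/querySql rejected
    }
}
```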
diff --git a/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/reader/util/SingleTableSplitUtil.java b/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/reader/util/SingleTableSplitUtil.java
index 7e09cce5..10cfe795 100755
--- a/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/reader/util/SingleTableSplitUtil.java
+++ b/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/reader/util/SingleTableSplitUtil.java
@@ -5,7 +5,7 @@ import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.plugin.rdbms.reader.Constant;
 import com.alibaba.datax.plugin.rdbms.reader.Key;
 import com.alibaba.datax.plugin.rdbms.util.*;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 
 import org.apache.commons.lang3.StringUtils;
 import org.apache.commons.lang3.tuple.ImmutablePair;
diff --git a/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DBUtil.java b/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DBUtil.java
index 978c4566..12a3aa74 100755
--- a/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DBUtil.java
+++ b/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DBUtil.java
@@ -720,6 +720,11 @@ public final class DBUtil {
                         new ArrayList(), String.class);
                 DBUtil.doDealWithSessionConfig(conn, sessionConfig, message);
                 break;
+            case SQLServer:
+                sessionConfig = config.getList(Key.SESSION,
+                        new ArrayList(), String.class);
+                DBUtil.doDealWithSessionConfig(conn, sessionConfig, message);
+                break;
             default:
                 break;
         }
diff --git a/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java b/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java
index 5468ce06..1b46a8bc 100755
--- a/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java
+++ b/plugin-rdbms-util/src/main/java/com/alibaba/datax/plugin/rdbms/util/DataBaseType.java
@@ -18,13 +18,14 @@ public enum DataBaseType {
     PostgreSQL("postgresql", "org.postgresql.Driver"),
     RDBMS("rdbms", "com.alibaba.datax.plugin.rdbms.util.DataBaseType"),
     DB2("db2", "com.ibm.db2.jcc.DB2Driver"),
+    ADB("adb","com.mysql.jdbc.Driver"),
     ADS("ads","com.mysql.jdbc.Driver"),
     ClickHouse("clickhouse", "ru.yandex.clickhouse.ClickHouseDriver"),
     KingbaseES("kingbasees", "com.kingbase8.Driver"),
     Oscar("oscar", "com.oscar.Driver"),
     OceanBase("oceanbase", "com.alipay.oceanbase.jdbc.Driver"),
-    StarRocks("starrocks", "com.mysql.jdbc.Driver");
-
+    StarRocks("starrocks", "com.mysql.jdbc.Driver"),
+    Databend("databend", "com.databend.jdbc.DatabendDriver");
 
     private String typeName;
     private String driverClassName;
@@ -89,6 +90,14 @@ public enum DataBaseType {
                     result = jdbc + "?" + suffix;
                 }
                 break;
+            case ADB:
+                suffix = "yearIsDateType=false&zeroDateTimeBehavior=convertToNull&rewriteBatchedStatements=true&tinyInt1isBit=false";
+                if (jdbc.contains("?")) {
+                    result = jdbc + "&" + suffix;
+                } else {
+                    result = jdbc + "?" + suffix;
+                }
+                break;
             case DRDS:
                 suffix = "yearIsDateType=false&zeroDateTimeBehavior=convertToNull";
                 if (jdbc.contains("?")) {
@@ -109,6 +118,8 @@ public enum DataBaseType {
                 break;
             case RDBMS:
                 break;
+            case Databend:
+                break;
             case KingbaseES:
                 break;
             case Oscar:
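The new `ADB` case follows the same pattern as the existing `MySql`/`DRDS` cases for attaching driver properties to a JDBC URL: append with `&` when the URL already carries a query string, otherwise start one with `?`. A standalone sketch of that rule (helper name is illustrative):

```java
public class JdbcSuffixSketch {
    // Same append rule used by the ADB/DRDS cases in DataBaseType.appendJDBCSuffix.
    static String appendSuffix(String jdbc, String suffix) {
        return jdbc.contains("?") ? jdbc + "&" + suffix : jdbc + "?" + suffix;
    }

    public static void main(String[] args) {
        String suffix = "yearIsDateType=false&zeroDateTimeBehavior=convertToNull";
        // No existing query string: start one with '?'.
        System.out.println(appendSuffix("jdbc:mysql://host:3306/db", suffix));
        // Existing query string: continue it with '&'.
        System.out.println(appendSuffix("jdbc:mysql://host:3306/db?useSSL=false", suffix));
    }
}
```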
diff --git a/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/reader/ColumnEntry.java b/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/reader/ColumnEntry.java
index ee3af816..c86bd206 100644
--- a/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/reader/ColumnEntry.java
+++ b/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/reader/ColumnEntry.java
@@ -5,7 +5,7 @@ import java.text.SimpleDateFormat;
 
 import org.apache.commons.lang3.StringUtils;
 
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 
 public class ColumnEntry {
     private Integer index;
diff --git a/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/reader/UnstructuredStorageReaderUtil.java b/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/reader/UnstructuredStorageReaderUtil.java
index 645971d0..afcad851 100755
--- a/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/reader/UnstructuredStorageReaderUtil.java
+++ b/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/reader/UnstructuredStorageReaderUtil.java
@@ -5,9 +5,9 @@ import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.common.plugin.RecordSender;
 import com.alibaba.datax.common.plugin.TaskPluginCollector;
 import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONObject;
-import com.alibaba.fastjson.TypeReference;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONObject;
+import com.alibaba.fastjson2.TypeReference;
 import com.csvreader.CsvReader;
 import org.apache.commons.beanutils.BeanUtils;
 import io.airlift.compress.snappy.SnappyCodec;
diff --git a/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/reader/split/UnstructuredSplitUtil.java b/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/reader/split/UnstructuredSplitUtil.java
index 8087ed63..4e42583d 100644
--- a/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/reader/split/UnstructuredSplitUtil.java
+++ b/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/reader/split/UnstructuredSplitUtil.java
@@ -5,7 +5,7 @@ import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.common.util.RangeSplitUtil;
 import com.alibaba.datax.plugin.unstructuredstorage.reader.Key;
 import com.alibaba.datax.plugin.unstructuredstorage.reader.UnstructuredStorageReaderErrorCode;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.commons.io.FileUtils;
 import org.apache.commons.lang3.tuple.ImmutableTriple;
 import org.apache.commons.lang3.tuple.Triple;
diff --git a/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/util/ColumnTypeUtil.java b/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/util/ColumnTypeUtil.java
index 8215bc36..a03bf07e 100644
--- a/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/util/ColumnTypeUtil.java
+++ b/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/util/ColumnTypeUtil.java
@@ -2,8 +2,8 @@ package com.alibaba.datax.plugin.unstructuredstorage.util;
 
 import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.plugin.unstructuredstorage.reader.ColumnEntry;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONObject;
 
 import java.util.ArrayList;
 import java.util.List;
diff --git a/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/writer/TextCsvWriterManager.java b/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/writer/TextCsvWriterManager.java
index 167a7a87..4a9b9197 100644
--- a/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/writer/TextCsvWriterManager.java
+++ b/plugin-unstructured-storage-util/src/main/java/com/alibaba/datax/plugin/unstructuredstorage/writer/TextCsvWriterManager.java
@@ -6,8 +6,8 @@ import java.util.HashMap;
 import java.util.List;
 
 import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.TypeReference;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.TypeReference;
 import org.apache.commons.beanutils.BeanUtils;
 import org.apache.commons.io.IOUtils;
 import org.apache.commons.lang3.StringUtils;
diff --git a/pom.xml b/pom.xml
index 6f9383d1..957c60ee 100644
--- a/pom.xml
+++ b/pom.xml
@@ -22,7 +22,7 @@
         <commons-lang3-version>3.3.2</commons-lang3-version>
         <commons-configuration-version>1.10</commons-configuration-version>
         <commons-cli-version>1.2</commons-cli-version>
-        <fastjson-version>1.1.46.sec10</fastjson-version>
+        <fastjson-version>2.0.23</fastjson-version>
         <guava-version>16.0.1</guava-version>
         <diamond.version>3.7.2.1-SNAPSHOT</diamond.version>
 
@@ -84,6 +84,7 @@
         <module>mysqlwriter</module>
         <module>starrockswriter</module>
         <module>drdswriter</module>
+        <module>databendwriter</module>
         <module>oraclewriter</module>
         <module>sqlserverwriter</module>
         <module>postgresqlwriter</module>
@@ -120,6 +121,8 @@
         <module>cassandrawriter</module>
         <module>clickhousewriter</module>
         <module>doriswriter</module>
+        <module>selectdbwriter</module>
+        <module>adbmysqlwriter</module>
 
         <!-- common support module -->
         <module>plugin-rdbms-util</module>
@@ -135,8 +138,8 @@
                 <version>${commons-lang3-version}</version>
             </dependency>
             <dependency>
-                <groupId>com.alibaba</groupId>
-                <artifactId>fastjson</artifactId>
+                <groupId>com.alibaba.fastjson2</groupId>
+                <artifactId>fastjson2</artifactId>
                 <version>${fastjson-version}</version>
             </dependency>
             <dependency>
+            <plugin>
+                <artifactId>maven-compiler-plugin</artifactId>
+                <configuration>
+                    <source>${jdk-version}</source>
+                    <target>${jdk-version}</target>
+                    <encoding>${project-sourceEncoding}</encoding>
+                </configuration>
+            </plugin>
+            <!-- assembly plugin used for zip when release -->
+            <plugin>
+                <artifactId>maven-assembly-plugin</artifactId>
+                <configuration>
+                    <descriptors>
+                        <descriptor>src/main/assembly/package.xml</descriptor>
+                    </descriptors>
+                    <finalName>datax</finalName>
+                </configuration>
+                <executions>
+                    <execution>
+                        <id>dwzip</id>
+                        <phase>package</phase>
+                        <goals>
+                            <goal>single</goal>
+                        </goals>
+                    </execution>
+                </executions>
+            </plugin>
+        </plugins>
+    </build>
+</project>
diff --git a/selectdbwriter/src/main/assembly/package.xml b/selectdbwriter/src/main/assembly/package.xml
new file mode 100644
index 00000000..1ea0009e
--- /dev/null
+++ b/selectdbwriter/src/main/assembly/package.xml
@@ -0,0 +1,34 @@
+<assembly
+        xmlns="http://maven.apache.org/plugin/assembly/assembly/1.0.0"
+        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+        xsi:schemaLocation="http://maven.apache.org/plugin/assembly/assembly/1.0.0 http://maven.apache.org/xsd/assembly-1.0.0.xsd">
+    <id></id>
+    <formats>
+        <format>dir</format>
+    </formats>
+    <includeBaseDirectory>false</includeBaseDirectory>
+    <fileSets>
+        <fileSet>
+            <directory>src/main/resources</directory>
+            <includes>
+                <include>plugin.json</include>
+                <include>plugin_job_template.json</include>
+            </includes>
+            <outputDirectory>plugin/writer/selectdbwriter</outputDirectory>
+        </fileSet>
+        <fileSet>
+            <directory>target/</directory>
+            <includes>
+                <include>selectdbwriter-0.0.1-SNAPSHOT.jar</include>
+            </includes>
+            <outputDirectory>plugin/writer/selectdbwriter</outputDirectory>
+        </fileSet>
+    </fileSets>
+    <dependencySets>
+        <dependencySet>
+            <useProjectArtifact>false</useProjectArtifact>
+            <outputDirectory>plugin/writer/selectdbwriter/libs</outputDirectory>
+            <scope>runtime</scope>
+        </dependencySet>
+    </dependencySets>
+</assembly>
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/BaseResponse.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/BaseResponse.java
new file mode 100644
index 00000000..c02f725f
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/BaseResponse.java
@@ -0,0 +1,23 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
+
+@JsonIgnoreProperties(ignoreUnknown = true)
+public class BaseResponse<T> {
+    private int code;
+    private String msg;
+    private T data;
+    private int count;
+
+    public int getCode() {
+        return code;
+    }
+
+    public String getMsg() {
+        return msg;
+    }
+
+    public T getData(){
+        return data;
+    }
+}
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/CopyIntoResp.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/CopyIntoResp.java
new file mode 100644
index 00000000..4da002ac
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/CopyIntoResp.java
@@ -0,0 +1,26 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
+
+import java.util.Map;
+
+@JsonIgnoreProperties(ignoreUnknown = true)
+public class CopyIntoResp extends BaseResponse{
+    private String code;
+    private String exception;
+
+    private Map<String, String> result;
+
+    public String getDataCode() {
+        return code;
+    }
+
+    public String getException() {
+        return exception;
+    }
+
+    public Map<String, String> getResult() {
+        return result;
+    }
+
+}
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/CopySQLBuilder.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/CopySQLBuilder.java
new file mode 100644
index 00000000..62910d5d
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/CopySQLBuilder.java
@@ -0,0 +1,40 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+
+import java.util.Map;
+import java.util.StringJoiner;
+
+public class CopySQLBuilder {
+    private final static String COPY_SYNC = "copy.async";
+    private final String fileName;
+    private final Keys options;
+    private Map<String, Object> properties;
+
+
+
+    public CopySQLBuilder(Keys options, String fileName) {
+        this.options=options;
+        this.fileName=fileName;
+        this.properties=options.getLoadProps();
+    }
+
+    public String buildCopySQL(){
+        StringBuilder sb = new StringBuilder();
+        sb.append("COPY INTO ")
+                .append(options.getDatabase() + "." + options.getTable())
+                .append(" FROM @~('").append(fileName).append("') ")
+                .append("PROPERTIES (");
+
+        // copy into must run synchronously, so force copy.async to false
+        properties.put(COPY_SYNC, false);
+        StringJoiner props = new StringJoiner(",");
+        for (Map.Entry<String, Object> entry : properties.entrySet()) {
+            String key = String.valueOf(entry.getKey());
+            String value = String.valueOf(entry.getValue());
+            String prop = String.format("'%s'='%s'", key, value);
+            props.add(prop);
+        }
+        sb.append(props).append(" )");
+        return sb.toString();
+    }
+}
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/DelimiterParser.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/DelimiterParser.java
new file mode 100644
index 00000000..fa6b397c
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/DelimiterParser.java
@@ -0,0 +1,54 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import com.google.common.base.Strings;
+
+import java.io.StringWriter;
+
+public class DelimiterParser {
+
+    private static final String HEX_STRING = "0123456789ABCDEF";
+
+    public static String parse(String sp, String dSp) throws RuntimeException {
+        if ( Strings.isNullOrEmpty(sp)) {
+            return dSp;
+        }
+        if (!sp.toUpperCase().startsWith("\\X")) {
+            return sp;
+        }
+        String hexStr = sp.substring(2);
+        // check hex str
+        if (hexStr.isEmpty()) {
+            throw new RuntimeException("Failed to parse delimiter: Hex str is empty");
+        }
+        if (hexStr.length() % 2 != 0) {
+            throw new RuntimeException("Failed to parse delimiter: Hex str length error");
+        }
+        for (char hexChar : hexStr.toUpperCase().toCharArray()) {
+            if (HEX_STRING.indexOf(hexChar) == -1) {
+                throw new RuntimeException("Failed to parse delimiter: Hex str format error");
+            }
+        }
+        // transform to separator
+        StringWriter writer = new StringWriter();
+        for (byte b : hexStrToBytes(hexStr)) {
+            writer.append((char) b);
+        }
+        return writer.toString();
+    }
+
+    private static byte[] hexStrToBytes(String hexStr) {
+        String upperHexStr = hexStr.toUpperCase();
+        int length = upperHexStr.length() / 2;
+        char[] hexChars = upperHexStr.toCharArray();
+        byte[] bytes = new byte[length];
+        for (int i = 0; i < length; i++) {
+            int pos = i * 2;
+            bytes[i] = (byte) (charToByte(hexChars[pos]) << 4 | charToByte(hexChars[pos + 1]));
+        }
+        return bytes;
+    }
+
+    private static byte charToByte(char c) {
+        return (byte) HEX_STRING.indexOf(c);
+    }
+}
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/HttpPostBuilder.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/HttpPostBuilder.java
new file mode 100644
index 00000000..9471debb
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/HttpPostBuilder.java
@@ -0,0 +1,51 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import org.apache.commons.codec.binary.Base64;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpHeaders;
+import org.apache.http.client.methods.HttpPost;
+
+import java.nio.charset.StandardCharsets;
+import java.util.HashMap;
+import java.util.Map;
+
+
+public class HttpPostBuilder {
+    String url;
+    Map<String, String> header;
+    HttpEntity httpEntity;
+    public HttpPostBuilder() {
+        header = new HashMap<>();
+    }
+
+    public HttpPostBuilder setUrl(String url) {
+        this.url = url;
+        return this;
+    }
+
+    public HttpPostBuilder addCommonHeader() {
+        header.put(HttpHeaders.EXPECT, "100-continue");
+        return this;
+    }
+
+    public HttpPostBuilder baseAuth(String user, String password) {
+        final String authInfo = user + ":" + password;
+        byte[] encoded = Base64.encodeBase64(authInfo.getBytes(StandardCharsets.UTF_8));
+        header.put(HttpHeaders.AUTHORIZATION, "Basic " + new String(encoded));
+        return this;
+    }
+
+    public HttpPostBuilder setEntity(HttpEntity httpEntity) {
+        this.httpEntity = httpEntity;
+        return this;
+    }
+
+    public HttpPost build() {
+        SelectdbUtil.checkNotNull(url);
+        SelectdbUtil.checkNotNull(httpEntity);
+        HttpPost put = new HttpPost(url);
+        header.forEach(put::setHeader);
+        put.setEntity(httpEntity);
+        return put;
+    }
+}
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/HttpPutBuilder.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/HttpPutBuilder.java
new file mode 100644
index 00000000..59d7dbca
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/HttpPutBuilder.java
@@ -0,0 +1,65 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import org.apache.commons.codec.binary.Base64;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpHeaders;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.entity.StringEntity;
+
+import java.nio.charset.StandardCharsets;
+import java.util.HashMap;
+import java.util.Map;
+
+public class HttpPutBuilder {
+    String url;
+    Map<String, String> header;
+    HttpEntity httpEntity;
+    public HttpPutBuilder() {
+        header = new HashMap<>();
+    }
+
+    public HttpPutBuilder setUrl(String url) {
+        this.url = url;
+        return this;
+    }
+
+    public HttpPutBuilder addFileName(String fileName){
+        header.put("fileName", fileName);
+        return this;
+    }
+
+    public HttpPutBuilder setEmptyEntity() {
+        try {
+            this.httpEntity = new StringEntity("");
+        } catch (Exception e) {
+            throw new IllegalArgumentException(e);
+        }
+        return this;
+    }
+
+    public HttpPutBuilder addCommonHeader() {
+        header.put(HttpHeaders.EXPECT, "100-continue");
+        return this;
+    }
+
+    public HttpPutBuilder baseAuth(String user, String password) {
+        final String authInfo = user + ":" + password;
+        byte[] encoded = Base64.encodeBase64(authInfo.getBytes(StandardCharsets.UTF_8));
+        header.put(HttpHeaders.AUTHORIZATION, "Basic " + new String(encoded));
+        return this;
+    }
+
+    public HttpPutBuilder setEntity(HttpEntity httpEntity) {
+        this.httpEntity = httpEntity;
+        return this;
+    }
+
+    public HttpPut build() {
+        SelectdbUtil.checkNotNull(url);
+        SelectdbUtil.checkNotNull(httpEntity);
+        HttpPut put = new HttpPut(url);
+        header.forEach(put::setHeader);
+        put.setEntity(httpEntity);
+        return put;
+    }
+}
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/Keys.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/Keys.java
new file mode 100644
index 00000000..6c767d93
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/Keys.java
@@ -0,0 +1,186 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import com.alibaba.datax.common.exception.DataXException;
+import com.alibaba.datax.common.util.Configuration;
+import com.alibaba.datax.plugin.rdbms.util.DBUtilErrorCode;
+
+import java.io.Serializable;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+public class Keys implements Serializable {
+
+    private static final long serialVersionUID = 1L;
+    private static final int DEFAULT_MAX_RETRIES = 3;
+    private static final int BATCH_ROWS = 500000;
+    private static final long DEFAULT_FLUSH_INTERVAL = 30000;
+
+    private static final String LOAD_PROPS_FORMAT = "file.type";
+    public enum StreamLoadFormat {
+        CSV, JSON;
+    }
+
+    private static final String USERNAME = "username";
+    private static final String PASSWORD = "password";
+    private static final String DATABASE = "connection[0].selectedDatabase";
+    private static final String TABLE = "connection[0].table[0]";
+    private static final String COLUMN = "column";
+    private static final String PRE_SQL = "preSql";
+    private static final String POST_SQL = "postSql";
+    private static final String JDBC_URL = "connection[0].jdbcUrl";
+    private static final String LABEL_PREFIX = "labelPrefix";
+    private static final String MAX_BATCH_ROWS = "maxBatchRows";
+    private static final String MAX_BATCH_SIZE = "batchSize";
+    private static final String FLUSH_INTERVAL = "flushInterval";
+    private static final String LOAD_URL = "loadUrl";
+    private static final String FLUSH_QUEUE_LENGTH = "flushQueueLength";
+    private static final String LOAD_PROPS = "loadProps";
+
+    private static final String DEFAULT_LABEL_PREFIX = "datax_selectdb_writer_";
+
+    private static final long DEFAULT_MAX_BATCH_SIZE = 90 * 1024 * 1024; //default 90M
+
+    private static final String CLUSTER_NAME = "clusterName";
+
+    private static final String MAX_RETRIES = "maxRetries";
+    private final Configuration options;
+
+    private List<String> infoSchemaColumns;
+    private List<String> userSetColumns;
+    private boolean isWildcardColumn;
+
+    public Keys(Configuration options) {
+        this.options = options;
+        this.userSetColumns = options.getList(COLUMN, String.class).stream().map(str -> str.replace("`", "")).collect(Collectors.toList());
+        if (1 == options.getList(COLUMN, String.class).size() && "*".equals(options.getList(COLUMN, String.class).get(0).trim())) {
+            this.isWildcardColumn = true;
+        }
+    }
+
+    public void doPretreatment() {
+        validateRequired();
+        validateStreamLoadUrl();
+    }
+
+    public String getJdbcUrl() {
+        return options.getString(JDBC_URL);
+    }
+
+    public String getDatabase() {
+        return options.getString(DATABASE);
+    }
+
+    public String getTable() {
+        return options.getString(TABLE);
+    }
+
+    public String getUsername() {
+        return options.getString(USERNAME);
+    }
+
+    public String getPassword() {
+        return options.getString(PASSWORD);
+    }
+
+    public String getClusterName(){
+        return options.getString(CLUSTER_NAME);
+    }
+
+    public String getLabelPrefix() {
+        String label = options.getString(LABEL_PREFIX);
+        return null == label ? DEFAULT_LABEL_PREFIX : label;
+    }
+
+    public List<String> getLoadUrlList() {
+        return options.getList(LOAD_URL, String.class);
+    }
+
+    public List<String> getColumns() {
+        if (isWildcardColumn) {
+            return this.infoSchemaColumns;
+        }
+        return this.userSetColumns;
+    }
+
+    public boolean isWildcardColumn() {
+        return this.isWildcardColumn;
+    }
+
+    public void setInfoCchemaColumns(List<String> cols) {
+        this.infoSchemaColumns = cols;
+    }
+
+    public List<String> getPreSqlList() {
+        return options.getList(PRE_SQL, String.class);
+    }
+
+    public List<String> getPostSqlList() {
+        return options.getList(POST_SQL, String.class);
+    }
+
+    public Map<String, Object> getLoadProps() {
+        return options.getMap(LOAD_PROPS);
+    }
+
+    public int getMaxRetries() {
+        Integer retries = options.getInt(MAX_RETRIES);
+        return null == retries ? DEFAULT_MAX_RETRIES : retries;
+    }
+
+    public int getBatchRows() {
+        Integer rows = options.getInt(MAX_BATCH_ROWS);
+        return null == rows ? BATCH_ROWS : rows;
+    }
+
+    public long getBatchSize() {
+        Long size = options.getLong(MAX_BATCH_SIZE);
+        return null == size ? DEFAULT_MAX_BATCH_SIZE : size;
+    }
+
+    public long getFlushInterval() {
+        Long interval = options.getLong(FLUSH_INTERVAL);
+        return null == interval ? DEFAULT_FLUSH_INTERVAL : interval;
+    }
+
+    public int getFlushQueueLength() {
+        Integer len = options.getInt(FLUSH_QUEUE_LENGTH);
+        return null == len ? 1 : len;
+    }
+
+
+    public StreamLoadFormat getStreamLoadFormat() {
+        Map<String, Object> loadProps = getLoadProps();
+        if (null == loadProps) {
+            return StreamLoadFormat.CSV;
+        }
+        if (loadProps.containsKey(LOAD_PROPS_FORMAT)
+                && StreamLoadFormat.JSON.name().equalsIgnoreCase(String.valueOf(loadProps.get(LOAD_PROPS_FORMAT)))) {
+            return StreamLoadFormat.JSON;
+        }
+        return StreamLoadFormat.CSV;
+    }
+
+    private void validateStreamLoadUrl() {
+        List<String> urlList = getLoadUrlList();
+        for (String host : urlList) {
+            if (host.split(":").length < 2) {
+                throw DataXException.asDataXException(DBUtilErrorCode.CONF_ERROR,
+                        "The format of loadUrl is not correct, please enter: [`fe_ip:fe_http_port;fe_ip:fe_http_port`].");
+            }
+        }
+    }
+
+    private void validateRequired() {
+        final String[] requiredOptionKeys = new String[]{
+                USERNAME,
+                DATABASE,
+                TABLE,
+                COLUMN,
+                LOAD_URL
+        };
+        for (String optionKey : requiredOptionKeys) {
+            options.getNecessaryValue(optionKey, DBUtilErrorCode.REQUIRED_VALUE);
+        }
+    }
+}
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbBaseCodec.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbBaseCodec.java
new file mode 100644
index 00000000..d2fc1224
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbBaseCodec.java
@@ -0,0 +1,23 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import com.alibaba.datax.common.element.Column;
+
+public class SelectdbBaseCodec {
+    protected String convertionField(Column col) {
+        if (null == col.getRawData() || Column.Type.NULL == col.getType()) {
+            return null;
+        }
+        if ( Column.Type.BOOL == col.getType()) {
+            return String.valueOf(col.asLong());
+        }
+        if ( Column.Type.BYTES == col.getType()) {
+            byte[] bts = (byte[])col.getRawData();
+            long value = 0;
+            for (int i = 0; i < bts.length; i++) {
+                value += (bts[bts.length - i - 1] & 0xffL) << (8 * i);
+            }
+            return String.valueOf(value);
+        }
+        return col.asString();
+    }
+}
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbCodec.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbCodec.java
new file mode 100644
index 00000000..b7e9d6ae
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbCodec.java
@@ -0,0 +1,10 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import com.alibaba.datax.common.element.Record;
+
+import java.io.Serializable;
+
+public interface SelectdbCodec extends Serializable {
+
+    String codec(Record row);
+}
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbCodecFactory.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbCodecFactory.java
new file mode 100644
index 00000000..567f4c0b
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbCodecFactory.java
@@ -0,0 +1,19 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import java.util.Map;
+
+public class SelectdbCodecFactory {
+    public SelectdbCodecFactory (){
+
+    }
+    public static SelectdbCodec createCodec(Keys writerOptions) {
+        if (Keys.StreamLoadFormat.CSV.equals(writerOptions.getStreamLoadFormat())) {
+            Map<String, Object> props = writerOptions.getLoadProps();
+            return new SelectdbCsvCodec(null == props || !props.containsKey("file.column_separator") ? null : String.valueOf(props.get("file.column_separator")));
+        }
+        if (Keys.StreamLoadFormat.JSON.equals(writerOptions.getStreamLoadFormat())) {
+            return new SelectdbJsonCodec(writerOptions.getColumns());
+        }
+        throw new RuntimeException("Failed to create row serializer, unsupported `format` from stream load properties.");
+    }
+}
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbCopyIntoObserver.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbCopyIntoObserver.java
new file mode 100644
index 00000000..c9228b22
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbCopyIntoObserver.java
@@ -0,0 +1,233 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.apache.http.client.methods.CloseableHttpResponse;
+import org.apache.http.entity.InputStreamEntity;
+import org.apache.http.entity.StringEntity;
+import org.apache.http.impl.client.CloseableHttpClient;
+import org.apache.http.impl.client.HttpClientBuilder;
+import org.apache.http.impl.client.HttpClients;
+import org.apache.http.util.EntityUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.regex.Pattern;
+
+public class SelectdbCopyIntoObserver {
+    private static final Logger LOG = LoggerFactory.getLogger(SelectdbCopyIntoObserver.class);
+
+    private Keys options;
+    private long pos;
+    public static final int SUCCESS = 0;
+    public static final String FAIL = "1";
+    private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
+    private final HttpClientBuilder httpClientBuilder = HttpClients
+        .custom()
+        .disableRedirectHandling();
+    private CloseableHttpClient httpClient;
+    private static final String UPLOAD_URL_PATTERN = "%s/copy/upload";
+    private static final String COMMIT_PATTERN = "%s/copy/query";
+    private static final Pattern COMMITTED_PATTERN = Pattern.compile("errCode = 2, detailMessage = No files can be copied, matched (\\d+) files, " + "filtered (\\d+) files because files may be loading or loaded");
+
+
+    public SelectdbCopyIntoObserver(Keys options) {
+        this.options = options;
+        this.httpClient = httpClientBuilder.build();
+
+    }
+
+    public void streamLoad(WriterTuple data) throws Exception {
+        String host = getLoadHost();
+        if (host == null) {
+            throw new RuntimeException("loadUrl cannot be empty and no configured host is reachable. Please check your configuration.");
+        }
+        String loadUrl = String.format(UPLOAD_URL_PATTERN, host);
+        String uploadAddress = getUploadAddress(loadUrl, data.getLabel());
+        put(uploadAddress, data.getLabel(), addRows(data.getRows(), data.getBytes().intValue()));
+        executeCopy(host,data.getLabel());
+
+    }
+
+    private String getUploadAddress(String loadUrl, String fileName) throws IOException {
+        HttpPutBuilder putBuilder = new HttpPutBuilder();
+        putBuilder.setUrl(loadUrl)
+            .addFileName(fileName)
+            .addCommonHeader()
+            .setEmptyEntity()
+            .baseAuth(options.getUsername(), options.getPassword());
+        CloseableHttpResponse execute = httpClientBuilder.build().execute(putBuilder.build());
+        int statusCode = execute.getStatusLine().getStatusCode();
+        String reason = execute.getStatusLine().getReasonPhrase();
+        if (statusCode == 307) {
+            Header location = execute.getFirstHeader("location");
+            String uploadAddress = location.getValue();
+            LOG.info("redirect to s3:{}", uploadAddress);
+            return uploadAddress;
+        } else {
+            HttpEntity entity = execute.getEntity();
+            String result = entity == null ? null : EntityUtils.toString(entity);
+            LOG.error("Failed get the redirected address, status {}, reason {}, response {}", statusCode, reason, result);
+            throw new RuntimeException("Could not get the redirected address.");
+        }
+
+    }
+
+    private byte[] addRows(List<byte[]> rows, int totalBytes) {
+        if (Keys.StreamLoadFormat.CSV.equals(options.getStreamLoadFormat())) {
+            Map<String, Object> props = (options.getLoadProps() == null ? new HashMap<>() : options.getLoadProps());
+            byte[] lineDelimiter = DelimiterParser.parse((String) props.get("file.line_delimiter"), "\n").getBytes(StandardCharsets.UTF_8);
+            ByteBuffer bos = ByteBuffer.allocate(totalBytes + rows.size() * lineDelimiter.length);
+            for (byte[] row : rows) {
+                bos.put(row);
+                bos.put(lineDelimiter);
+            }
+            return bos.array();
+        }
+
+        if (Keys.StreamLoadFormat.JSON.equals(options.getStreamLoadFormat())) {
+            ByteBuffer bos = ByteBuffer.allocate(totalBytes + (rows.isEmpty() ? 2 : rows.size() + 1));
+            bos.put("[".getBytes(StandardCharsets.UTF_8));
+            byte[] jsonDelimiter = ",".getBytes(StandardCharsets.UTF_8);
+            boolean isFirstElement = true;
+            for (byte[] row : rows) {
+                if (!isFirstElement) {
+                    bos.put(jsonDelimiter);
+                }
+                bos.put(row);
+                isFirstElement = false;
+            }
+            bos.put("]".getBytes(StandardCharsets.UTF_8));
+            return bos.array();
+        }
+        throw new RuntimeException("Failed to join rows data, unsupported `file.type` from copy into properties.");
+    }
+
+    public void put(String loadUrl, String fileName, byte[] data) throws IOException {
+        LOG.info(String.format("Executing upload file to: '%s', size: '%s'", loadUrl, data.length));
+        HttpPutBuilder putBuilder = new HttpPutBuilder();
+        putBuilder.setUrl(loadUrl)
+            .addCommonHeader()
+            .setEntity(new InputStreamEntity(new ByteArrayInputStream(data)));
+        CloseableHttpResponse response = httpClient.execute(putBuilder.build());
+        final int statusCode = response.getStatusLine().getStatusCode();
+        if (statusCode != 200) {
+            String result = response.getEntity() == null ? null : EntityUtils.toString(response.getEntity());
+            LOG.error("upload file {} error, response {}", fileName, result);
+            throw new SelectdbWriterException("upload file error: " + fileName,true);
+        }
+    }
+
+    private String getLoadHost() {
+        List<String> hostList = options.getLoadUrlList();
+        long tmp = pos + hostList.size();
+        for (; pos < tmp; pos++) {
+            String host = new StringBuilder("http://").append(hostList.get((int) (pos % hostList.size()))).toString();
+            if (checkConnection(host)) {
+                return host;
+            }
+        }
+        return null;
+    }
+
+    private boolean checkConnection(String host) {
+        try {
+            URL url = new URL(host);
+            HttpURLConnection co = (HttpURLConnection) url.openConnection();
+            co.setConnectTimeout(5000);
+            co.connect();
+            co.disconnect();
+            return true;
+        } catch (Exception e) {
+            LOG.warn("Failed to connect to host: {}.", host, e);
+            return false;
+        }
+    }
+
+
+    /**
+     * execute copy into
+     */
+    public void executeCopy(String hostPort, String fileName) throws IOException{
+        long start = System.currentTimeMillis();
+        CopySQLBuilder copySQLBuilder = new CopySQLBuilder(options, fileName);
+        String copySQL = copySQLBuilder.buildCopySQL();
+        LOG.info("build copy SQL is {}", copySQL);
+        Map<String, String> params = new HashMap<>();
+        params.put("sql", copySQL);
+        if(StringUtils.isNotBlank(options.getClusterName())){
+            params.put("cluster",options.getClusterName());
+        }
+        HttpPostBuilder postBuilder = new HttpPostBuilder();
+        postBuilder.setUrl(String.format(COMMIT_PATTERN, hostPort))
+            .baseAuth(options.getUsername(), options.getPassword())
+            .setEntity(new StringEntity(OBJECT_MAPPER.writeValueAsString(params)));
+
+        CloseableHttpResponse response = httpClient.execute(postBuilder.build());
+        final int statusCode = response.getStatusLine().getStatusCode();
+        final String reasonPhrase = response.getStatusLine().getReasonPhrase();
+        String loadResult = "";
+        if (statusCode != 200) {
+            LOG.warn("commit failed with status {} {}, reason {}", statusCode, hostPort, reasonPhrase);
+            throw new SelectdbWriterException("commit error with file: " + fileName,true);
+        } else if (response.getEntity() != null){
+            loadResult = EntityUtils.toString(response.getEntity());
+            boolean success = handleCommitResponse(loadResult);
+            if(success){
+                LOG.info("commit success cost {}ms, response is {}", System.currentTimeMillis() - start, loadResult);
+            }else{
+                throw new SelectdbWriterException("commit fail",true);
+            }
+        }
+    }
+
+    public boolean handleCommitResponse(String loadResult) throws IOException {
+        BaseResponse<CopyIntoResp> baseResponse = OBJECT_MAPPER.readValue(loadResult, new TypeReference<BaseResponse<CopyIntoResp>>(){});
+        if(baseResponse.getCode() == SUCCESS){
+            CopyIntoResp dataResp = baseResponse.getData();
+            if(FAIL.equals(dataResp.getDataCode())){
+                LOG.error("copy into execute failed, reason:{}", loadResult);
+                return false;
+            }else{
+                Map<String, String> result = dataResp.getResult();
+                if(!result.get("state").equals("FINISHED") && !isCommitted(result.get("msg"))){
+                    LOG.error("copy into load failed, reason:{}", loadResult);
+                    return false;
+                }else{
+                    return true;
+                }
+            }
+        }else{
+            LOG.error("commit failed, reason:{}", loadResult);
+            return false;
+        }
+    }
+
+    public static boolean isCommitted(String msg) {
+        return COMMITTED_PATTERN.matcher(msg).matches();
+    }
+
+
+    public void close() throws IOException {
+        if (null != httpClient) {
+            try {
+                httpClient.close();
+            } catch (IOException e) {
+                LOG.error("Closing httpClient failed.", e);
+                throw new RuntimeException("Closing httpClient failed.", e);
+            }
+        }
+    }
+}
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbCsvCodec.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbCsvCodec.java
new file mode 100644
index 00000000..57cad84d
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbCsvCodec.java
@@ -0,0 +1,27 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import com.alibaba.datax.common.element.Record;
+
+public class SelectdbCsvCodec extends SelectdbBaseCodec implements SelectdbCodec {
+
+    private static final long serialVersionUID = 1L;
+
+    private final String columnSeparator;
+
+    public SelectdbCsvCodec(String sp) {
+        this.columnSeparator = DelimiterParser.parse(sp, "\t");
+    }
+
+    @Override
+    public String codec(Record row) {
+        StringBuilder sb = new StringBuilder();
+        for (int i = 0; i < row.getColumnNumber(); i++) {
+            String value = convertionField(row.getColumn(i));
+            sb.append(null == value ? "\\N" : value);
+            if (i < row.getColumnNumber() - 1) {
+                sb.append(columnSeparator);
+            }
+        }
+        return sb.toString();
+    }
+}
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbJsonCodec.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbJsonCodec.java
new file mode 100644
index 00000000..8b1a3760
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbJsonCodec.java
@@ -0,0 +1,33 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import com.alibaba.datax.common.element.Record;
+import com.alibaba.fastjson2.JSON;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class SelectdbJsonCodec extends SelectdbBaseCodec implements SelectdbCodec {
+
+    private static final long serialVersionUID = 1L;
+
+    private final List<String> fieldNames;
+
+    public SelectdbJsonCodec(List<String> fieldNames) {
+        this.fieldNames = fieldNames;
+    }
+
+    @Override
+    public String codec(Record row) {
+        if (null == fieldNames) {
+            return "";
+        }
+        Map<String, Object> rowMap = new HashMap<>(fieldNames.size());
+        int idx = 0;
+        for (String fieldName : fieldNames) {
+            rowMap.put(fieldName, convertionField(row.getColumn(idx)));
+            idx++;
+        }
+        return JSON.toJSONString(rowMap);
+    }
+}
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbUtil.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbUtil.java
new file mode 100644
index 00000000..6cfcc8bf
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbUtil.java
@@ -0,0 +1,113 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import com.alibaba.datax.plugin.rdbms.util.DBUtil;
+import com.alibaba.datax.plugin.rdbms.util.DataBaseType;
+import com.alibaba.datax.plugin.rdbms.util.RdbmsException;
+import com.alibaba.datax.plugin.rdbms.writer.Constant;
+import com.alibaba.druid.sql.parser.ParserException;
+import com.google.common.base.Strings;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.ResultSet;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * jdbc util
+ */
+public class SelectdbUtil {
+    private static final Logger LOG = LoggerFactory.getLogger(SelectdbUtil.class);
+
+    private SelectdbUtil() {}
+
+    public static List<String> getDorisTableColumns(Connection conn, String databaseName, String tableName) {
+        String currentSql = String.format("SELECT COLUMN_NAME FROM `information_schema`.`COLUMNS` WHERE `TABLE_SCHEMA` = '%s' AND `TABLE_NAME` = '%s' ORDER BY `ORDINAL_POSITION` ASC;", databaseName, tableName);
+        List<String> columns = new ArrayList<>();
+        ResultSet rs = null;
+        try {
+            rs = DBUtil.query(conn, currentSql);
+            while (DBUtil.asyncResultSetNext(rs)) {
+                String colName = rs.getString("COLUMN_NAME");
+                columns.add(colName);
+            }
+            return columns;
+        } catch (Exception e) {
+            throw RdbmsException.asQueryException(DataBaseType.MySql, e, currentSql, null, null);
+        } finally {
+            DBUtil.closeDBResources(rs, null, null);
+        }
+    }
+
+    public static List<String> renderPreOrPostSqls(List<String> preOrPostSqls, String tableName) {
+        if (null == preOrPostSqls) {
+            return Collections.emptyList();
+        }
+        List<String> renderedSqls = new ArrayList<>();
+        for (String sql : preOrPostSqls) {
+            if (!Strings.isNullOrEmpty(sql)) {
+                renderedSqls.add(sql.replace(Constant.TABLE_NAME_PLACEHOLDER, tableName));
+            }
+        }
+        return renderedSqls;
+    }
+
+    public static void executeSqls(Connection conn, List<String> sqls) {
+        Statement stmt = null;
+        String currentSql = null;
+        try {
+            stmt = conn.createStatement();
+            for (String sql : sqls) {
+                currentSql = sql;
+                DBUtil.executeSqlWithoutResultSet(stmt, sql);
+            }
+        } catch (Exception e) {
+            throw RdbmsException.asQueryException(DataBaseType.MySql, e, currentSql, null, null);
+        } finally {
+            DBUtil.closeDBResources(null, stmt, null);
+        }
+    }
+
+    public static void preCheckPrePareSQL(Keys options) {
+        String table = options.getTable();
+        List<String> preSqls = options.getPreSqlList();
+        List<String> renderedPreSqls = SelectdbUtil.renderPreOrPostSqls(preSqls, table);
+        if (null != renderedPreSqls && !renderedPreSqls.isEmpty()) {
+            LOG.info("Begin to preCheck preSqls:[{}].", String.join(";", renderedPreSqls));
+            for (String sql : renderedPreSqls) {
+                try {
+                    DBUtil.sqlValid(sql, DataBaseType.MySql);
+                } catch (ParserException e) {
+                    throw RdbmsException.asPreSQLParserException(DataBaseType.MySql, e, sql);
+                }
+            }
+        }
+    }
+
+    public static void preCheckPostSQL(Keys options) {
+        String table = options.getTable();
+        List<String> postSqls = options.getPostSqlList();
+        List<String> renderedPostSqls = SelectdbUtil.renderPreOrPostSqls(postSqls, table);
+        if (null != renderedPostSqls && !renderedPostSqls.isEmpty()) {
+            LOG.info("Begin to preCheck postSqls:[{}].", String.join(";", renderedPostSqls));
+            for (String sql : renderedPostSqls) {
+                try {
+                    DBUtil.sqlValid(sql, DataBaseType.MySql);
+                } catch (ParserException e) {
+                    throw RdbmsException.asPostSQLParserException(DataBaseType.MySql, e, sql);
+                }
+            }
+        }
+    }
+
+    public static <T> T checkNotNull(T reference) {
+        if (reference == null) {
+            throw new NullPointerException();
+        } else {
+            return reference;
+        }
+    }
+}
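Review note: `renderPreOrPostSqls` substitutes the configured table name for DataX's table placeholder (`Constant.TABLE_NAME_PLACEHOLDER`; the sqlserverwriter docs in this PR use `@table`). A dependency-free sketch, with the placeholder value hard-coded as an assumption:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class RenderSqlSketch {
    // Assumed placeholder; the real value comes from
    // com.alibaba.datax.plugin.rdbms.writer.Constant.TABLE_NAME_PLACEHOLDER.
    static final String TABLE_NAME_PLACEHOLDER = "@table";

    // Each non-empty SQL gets the actual table name substituted in,
    // mirroring SelectdbUtil.renderPreOrPostSqls().
    static List<String> render(List<String> preOrPostSqls, String tableName) {
        if (preOrPostSqls == null) {
            return Collections.emptyList();
        }
        List<String> rendered = new ArrayList<>();
        for (String sql : preOrPostSqls) {
            if (sql != null && !sql.isEmpty()) {
                rendered.add(sql.replace(TABLE_NAME_PLACEHOLDER, tableName));
            }
        }
        return rendered;
    }

    public static void main(String[] args) {
        // prints [delete from orders where db_id = -1;]
        System.out.println(render(
                Arrays.asList("delete from @table where db_id = -1;"), "orders"));
    }
}
```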
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbWriter.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbWriter.java
new file mode 100644
index 00000000..2b91f122
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbWriter.java
@@ -0,0 +1,149 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import com.alibaba.datax.common.element.Record;
+import com.alibaba.datax.common.exception.DataXException;
+import com.alibaba.datax.common.plugin.RecordReceiver;
+import com.alibaba.datax.common.spi.Writer;
+import com.alibaba.datax.common.util.Configuration;
+import com.alibaba.datax.plugin.rdbms.util.DBUtil;
+import com.alibaba.datax.plugin.rdbms.util.DBUtilErrorCode;
+import com.alibaba.datax.plugin.rdbms.util.DataBaseType;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * selectdb data writer
+ */
+public class SelectdbWriter extends Writer {
+
+    public static class Job extends Writer.Job {
+
+        private static final Logger LOG = LoggerFactory.getLogger(Job.class);
+        private Configuration originalConfig = null;
+        private Keys options;
+
+        @Override
+        public void init() {
+            this.originalConfig = super.getPluginJobConf();
+            options = new Keys(super.getPluginJobConf());
+            options.doPretreatment();
+        }
+
+        @Override
+        public void preCheck(){
+            this.init();
+            SelectdbUtil.preCheckPrePareSQL(options);
+            SelectdbUtil.preCheckPostSQL(options);
+        }
+
+        @Override
+        public void prepare() {
+            String username = options.getUsername();
+            String password = options.getPassword();
+            String jdbcUrl = options.getJdbcUrl();
+            List<String> renderedPreSqls = SelectdbUtil.renderPreOrPostSqls(options.getPreSqlList(), options.getTable());
+            if (null != renderedPreSqls && !renderedPreSqls.isEmpty()) {
+                Connection conn = DBUtil.getConnection(DataBaseType.MySql, jdbcUrl, username, password);
+                LOG.info("Begin to execute preSqls:[{}]. context info:{}.", String.join(";", renderedPreSqls), jdbcUrl);
+                SelectdbUtil.executeSqls(conn, renderedPreSqls);
+                DBUtil.closeDBResources(null, null, conn);
+            }
+        }
+
+        @Override
+        public List<Configuration> split(int mandatoryNumber) {
+            List<Configuration> configurations = new ArrayList<>(mandatoryNumber);
+            for (int i = 0; i < mandatoryNumber; i++) {
+                configurations.add(originalConfig);
+            }
+            return configurations;
+        }
+
+        @Override
+        public void post() {
+            String username = options.getUsername();
+            String password = options.getPassword();
+            String jdbcUrl = options.getJdbcUrl();
+            List<String> renderedPostSqls = SelectdbUtil.renderPreOrPostSqls(options.getPostSqlList(), options.getTable());
+            if (null != renderedPostSqls && !renderedPostSqls.isEmpty()) {
+                Connection conn = DBUtil.getConnection(DataBaseType.MySql, jdbcUrl, username, password);
+                LOG.info("Begin to execute postSqls:[{}]. context info:{}.", String.join(";", renderedPostSqls), jdbcUrl);
+                SelectdbUtil.executeSqls(conn, renderedPostSqls);
+                DBUtil.closeDBResources(null, null, conn);
+            }
+        }
+
+        @Override
+        public void destroy() {
+        }
+
+    }
+
+    public static class Task extends Writer.Task {
+        private SelectdbWriterManager writerManager;
+        private Keys options;
+        private SelectdbCodec rowCodec;
+
+        @Override
+        public void init() {
+            options = new Keys(super.getPluginJobConf());
+            if (options.isWildcardColumn()) {
+                Connection conn = DBUtil.getConnection(DataBaseType.MySql, options.getJdbcUrl(), options.getUsername(), options.getPassword());
+                List<String> columns = SelectdbUtil.getDorisTableColumns(conn, options.getDatabase(), options.getTable());
+                options.setInfoCchemaColumns(columns);
+            }
+            writerManager = new SelectdbWriterManager(options);
+            rowCodec = SelectdbCodecFactory.createCodec(options);
+        }
+
+        @Override
+        public void prepare() {
+        }
+
+        public void startWrite(RecordReceiver recordReceiver) {
+            try {
+                Record record;
+                while ((record = recordReceiver.getFromReader()) != null) {
+                    if (record.getColumnNumber() != options.getColumns().size()) {
+                        throw DataXException
+                                .asDataXException(
+                                        DBUtilErrorCode.CONF_ERROR,
+                                        String.format(
+                                                "There is an error in the column configuration information. " +
+                                                "This is because you have configured a task where the number of fields to be read from the source:%s " +
+                                                "is not equal to the number of fields to be written to the destination table:%s. " +
+                                                "Please check your configuration and make changes.",
+                                                record.getColumnNumber(),
+                                                options.getColumns().size()));
+                    }
+                    writerManager.writeRecord(rowCodec.codec(record));
+                }
+            } catch (Exception e) {
+                throw DataXException.asDataXException(DBUtilErrorCode.WRITE_DATA_ERROR, e);
+            }
+        }
+
+        @Override
+        public void post() {
+            try {
+                writerManager.close();
+            } catch (Exception e) {
+                throw DataXException.asDataXException(DBUtilErrorCode.WRITE_DATA_ERROR, e);
+            }
+        }
+
+        @Override
+        public void destroy() {}
+
+        @Override
+        public boolean supportFailOver(){
+            return false;
+        }
+    }
+
+
+}
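Review note: `Job.split` above does not partition the data; every task receives the same configuration, and parallelism comes from the framework running `mandatoryNumber` identical tasks against the record channel. A generic sketch of that behavior (names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitSketch {
    // Return mandatoryNumber references to the same config object,
    // as SelectdbWriter.Job.split() does with the job Configuration.
    static <T> List<T> split(T config, int mandatoryNumber) {
        List<T> configurations = new ArrayList<>(mandatoryNumber);
        for (int i = 0; i < mandatoryNumber; i++) {
            configurations.add(config);
        }
        return configurations;
    }

    public static void main(String[] args) {
        List<String> tasks = split("job-conf", 3);
        System.out.println(tasks.size());                 // 3
        System.out.println(tasks.get(0) == tasks.get(2)); // true: same object, not copies
    }
}
```

Because all tasks share one object, any task that mutated its configuration would affect its siblings; the writer treats it as read-only.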
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbWriterException.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbWriterException.java
new file mode 100644
index 00000000..f85a06d1
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbWriterException.java
@@ -0,0 +1,39 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+
+public class SelectdbWriterException extends RuntimeException  {
+
+    private boolean reCreateLabel;
+
+
+    public SelectdbWriterException() {
+        super();
+    }
+
+    public SelectdbWriterException(String message) {
+        super(message);
+    }
+
+    public SelectdbWriterException(String message, boolean reCreateLabel) {
+        super(message);
+        this.reCreateLabel = reCreateLabel;
+    }
+
+    public SelectdbWriterException(String message, Throwable cause) {
+        super(message, cause);
+    }
+
+    public SelectdbWriterException(Throwable cause) {
+        super(cause);
+    }
+
+    protected SelectdbWriterException(String message, Throwable cause,
+                                      boolean enableSuppression,
+                                      boolean writableStackTrace) {
+        super(message, cause, enableSuppression, writableStackTrace);
+    }
+
+    public boolean needReCreateLabel() {
+        return reCreateLabel;
+    }
+}
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbWriterManager.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbWriterManager.java
new file mode 100644
index 00000000..e8b22b7f
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/SelectdbWriterManager.java
@@ -0,0 +1,196 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import com.google.common.base.Strings;
+import org.apache.commons.lang3.concurrent.BasicThreadFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingDeque;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.TimeUnit;
+
+public class SelectdbWriterManager {
+
+    private static final Logger LOG = LoggerFactory.getLogger(SelectdbWriterManager.class);
+
+    private final SelectdbCopyIntoObserver visitor;
+    private final Keys options;
+    private final List<byte[]> buffer = new ArrayList<>();
+    private int batchCount = 0;
+    private long batchSize = 0;
+    private volatile boolean closed = false;
+    private volatile Exception flushException;
+    private final LinkedBlockingDeque<WriterTuple> flushQueue;
+    private ScheduledExecutorService scheduler;
+    private ScheduledFuture<?> scheduledFuture;
+
+    public SelectdbWriterManager(Keys options) {
+        this.options = options;
+        this.visitor = new SelectdbCopyIntoObserver(options);
+        flushQueue = new LinkedBlockingDeque<>(options.getFlushQueueLength());
+        this.startScheduler();
+        this.startAsyncFlushing();
+    }
+
+    public void startScheduler() {
+        stopScheduler();
+        this.scheduler = Executors.newScheduledThreadPool(1, new BasicThreadFactory.Builder().namingPattern("Selectdb-interval-flush").daemon(true).build());
+        this.scheduledFuture = this.scheduler.schedule(() -> {
+            synchronized (SelectdbWriterManager.this) {
+                if (!closed) {
+                    try {
+                        String label = createBatchLabel();
+                        LOG.info(String.format("Selectdb interval Sinking triggered: label[%s].", label));
+                        if (batchCount == 0) {
+                            startScheduler();
+                        }
+                        flush(label, false);
+                    } catch (Exception e) {
+                        flushException = e;
+                    }
+                }
+            }
+        }, options.getFlushInterval(), TimeUnit.MILLISECONDS);
+    }
+
+    public void stopScheduler() {
+        if (this.scheduledFuture != null) {
+            scheduledFuture.cancel(false);
+            this.scheduler.shutdown();
+        }
+    }
+
+    public final synchronized void writeRecord(String record) throws IOException {
+        checkFlushException();
+        try {
+            byte[] bts = record.getBytes(StandardCharsets.UTF_8);
+            buffer.add(bts);
+            batchCount++;
+            batchSize += bts.length;
+            if (batchCount >= options.getBatchRows() || batchSize >= options.getBatchSize()) {
+                String label = createBatchLabel();
+                if(LOG.isDebugEnabled()){
+                    LOG.debug(String.format("buffer Sinking triggered: rows[%d] label [%s].", batchCount, label));
+                }
+                flush(label, false);
+            }
+        } catch (Exception e) {
+            throw new SelectdbWriterException("Writing records to selectdb failed.", e);
+        }
+    }
+
+    public synchronized void flush(String label, boolean waitUtilDone) throws Exception {
+        checkFlushException();
+        if (batchCount == 0) {
+            if (waitUtilDone) {
+                waitAsyncFlushingDone();
+            }
+            return;
+        }
+        flushQueue.put(new WriterTuple(label, batchSize, new ArrayList<>(buffer)));
+        if (waitUtilDone) {
+            // wait the last flush
+            waitAsyncFlushingDone();
+        }
+        buffer.clear();
+        batchCount = 0;
+        batchSize = 0;
+    }
+
+    public synchronized void close() throws IOException {
+        if (!closed) {
+            closed = true;
+            try {
+                String label = createBatchLabel();
+                if (batchCount > 0) {
+                    if (LOG.isDebugEnabled()) {
+                        LOG.debug(String.format("Selectdb Sink is about to close: label[%s].", label));
+                    }
+                }
+                flush(label, true);
+            } catch (Exception e) {
+                throw new RuntimeException("Writing records to Selectdb failed.", e);
+            }
+        }
+        checkFlushException();
+    }
+
+    public String createBatchLabel() {
+        StringBuilder sb = new StringBuilder();
+        if (!Strings.isNullOrEmpty(options.getLabelPrefix())) {
+            sb.append(options.getLabelPrefix());
+        }
+        return sb.append(UUID.randomUUID().toString())
+                .toString();
+    }
+
+    private void startAsyncFlushing() {
+        // start flush thread
+        Thread flushThread = new Thread(new Runnable() {
+            public void run() {
+                while (true) {
+                    try {
+                        asyncFlush();
+                    } catch (Exception e) {
+                        flushException = e;
+                    }
+                }
+            }
+        });
+        flushThread.setDaemon(true);
+        flushThread.start();
+    }
+
+    private void waitAsyncFlushingDone() throws InterruptedException {
+        // wait previous flushings
+        for (int i = 0; i <= options.getFlushQueueLength(); i++) {
+            flushQueue.put(new WriterTuple("", 0L, null));
+        }
+        checkFlushException();
+    }
+
+    private void asyncFlush() throws Exception {
+        WriterTuple flushData = flushQueue.take();
+        if (Strings.isNullOrEmpty(flushData.getLabel())) {
+            return;
+        }
+        stopScheduler();
+        for (int i = 0; i <= options.getMaxRetries(); i++) {
+            try {
+                // copy into
+                visitor.streamLoad(flushData);
+                startScheduler();
+                break;
+            } catch (Exception e) {
+                LOG.warn("Failed to flush batch data to selectdb, retry times = {}", i, e);
+                if (i >= options.getMaxRetries()) {
+                    throw new RuntimeException(e);
+                }
+                if (e instanceof SelectdbWriterException && ((SelectdbWriterException)e).needReCreateLabel()) {
+                    String newLabel = createBatchLabel();
+                    LOG.warn(String.format("Batch label changed from [%s] to [%s]", flushData.getLabel(), newLabel));
+                    flushData.setLabel(newLabel);
+                }
+                try {
+                    Thread.sleep(1000L * Math.min(i + 1, 100));
+                } catch (InterruptedException ex) {
+                    Thread.currentThread().interrupt();
+                    throw new RuntimeException("Unable to flush, interrupted while doing another attempt", e);
+                }
+            }
+        }
+    }
+
+    private void checkFlushException() {
+        if (flushException != null) {
+            throw new RuntimeException("Writing records to selectdb failed.", flushException);
+        }
+    }
+}
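Review note: on a failed flush, `asyncFlush` retries up to `maxRetries` times, sleeping `1000ms * min(attempt + 1, 100)` between attempts, i.e. linear backoff capped at 100 seconds. The sleep formula isolated (the method name is illustrative):

```java
public class BackoffSketch {
    // Matches the sleep in SelectdbWriterManager.asyncFlush():
    // 1s after the first failure, 2s after the second, ... capped at 100s.
    static long backoffMillis(int attempt) {
        return 1000L * Math.min(attempt + 1, 100);
    }

    public static void main(String[] args) {
        System.out.println(backoffMillis(0));   // 1000
        System.out.println(backoffMillis(4));   // 5000
        System.out.println(backoffMillis(500)); // 100000 (capped)
    }
}
```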
diff --git a/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/WriterTuple.java b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/WriterTuple.java
new file mode 100644
index 00000000..483ade05
--- /dev/null
+++ b/selectdbwriter/src/main/java/com/alibaba/datax/plugin/writer/selectdbwriter/WriterTuple.java
@@ -0,0 +1,22 @@
+package com.alibaba.datax.plugin.writer.selectdbwriter;
+
+import java.util.List;
+
+public class WriterTuple {
+    private String label;
+    private Long bytes;
+    private List<byte[]> rows;
+
+
+    public WriterTuple(String label, Long bytes, List<byte[]> rows) {
+        this.label = label;
+        this.rows = rows;
+        this.bytes = bytes;
+    }
+
+    public String getLabel() { return label; }
+    public void setLabel(String label) { this.label = label; }
+    public Long getBytes() { return bytes; }
+    public List<byte[]> getRows() { return rows; }
+
+}
diff --git a/selectdbwriter/src/main/resources/plugin.json b/selectdbwriter/src/main/resources/plugin.json
new file mode 100644
index 00000000..4b84a945
--- /dev/null
+++ b/selectdbwriter/src/main/resources/plugin.json
@@ -0,0 +1,6 @@
+{
+  "name": "selectdbwriter",
+  "class": "com.alibaba.datax.plugin.writer.selectdbwriter.SelectdbWriter",
+  "description": "selectdb writer plugin",
+  "developer": "selectdb"
+}
\ No newline at end of file
diff --git a/selectdbwriter/src/main/resources/plugin_job_template.json b/selectdbwriter/src/main/resources/plugin_job_template.json
new file mode 100644
index 00000000..c603b7e0
--- /dev/null
+++ b/selectdbwriter/src/main/resources/plugin_job_template.json
@@ -0,0 +1,19 @@
+{
+  "name": "selectdbwriter",
+  "parameter": {
+    "username": "",
+    "password": "",
+    "column": [],
+    "preSql": [],
+    "postSql": [],
+    "loadUrl": [],
+    "loadProps": {},
+    "connection": [
+      {
+        "jdbcUrl": "",
+        "selectedDatabase": "",
+        "table": []
+      }
+    ]
+  }
+}
\ No newline at end of file
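Review note: a filled-in instance of the template above may help readers; host, port, credentials, database, and column names below are placeholders, not real endpoints:

```json
{
  "name": "selectdbwriter",
  "parameter": {
    "username": "admin",
    "password": "********",
    "column": ["id", "name"],
    "preSql": [],
    "postSql": [],
    "loadUrl": ["HOST:PORT"],
    "loadProps": {},
    "connection": [
      {
        "jdbcUrl": "jdbc:mysql://HOST:PORT",
        "selectedDatabase": "example_db",
        "table": ["example_table"]
      }
    ]
  }
}
```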
diff --git a/sqlserverwriter/doc/sqlserverwriter.md b/sqlserverwriter/doc/sqlserverwriter.md
index cdaf1526..7d786292 100644
--- a/sqlserverwriter/doc/sqlserverwriter.md
+++ b/sqlserverwriter/doc/sqlserverwriter.md
@@ -69,6 +69,7 @@ SqlServerWriter 通过 DataX 框架获取 Reader 生成的协议数据,根据
                                 "jdbcUrl": "jdbc:sqlserver://[HOST_NAME]:PORT;DatabaseName=[DATABASE_NAME]"
                             }
                         ],
+			"session": ["SET IDENTITY_INSERT TABLE_NAME ON"],
                         "preSql": [
                             "delete from @table where db_id = -1;"
                         ],
@@ -139,6 +140,14 @@ SqlServerWriter 通过 DataX 框架获取 Reader 生成的协议数据,根据
 
   * 默认值:否 
+* **session**
+
+  * 描述:DataX在获取 sqlserver 连接时,执行session指定的SQL语句,修改当前connection session属性
+
+  * 必选:否
+
+  * 默认值:无
+
 * **preSql** 
 
   * 描述:写入数据到目的表前,会先执行这里的标准语句。如果 Sql 中有你需要操作到的表名称,请使用 `@table` 表示,这样在实际执行 Sql 语句时,会对变量按照实际表名称进行替换。
diff --git a/starrockswriter/doc/starrockswriter.md b/starrockswriter/doc/starrockswriter.md
index ba94e6af..6ebe3681 100644
--- a/starrockswriter/doc/starrockswriter.md
+++ b/starrockswriter/doc/starrockswriter.md
@@ -64,13 +64,13 @@ StarRocksWriter 插件实现了写入数据到 StarRocks 主库的目的表的
                         "column": ["k1", "k2", "v1", "v2"],
                         "preSql": [],
                         "postSql": [],
-                        "connection": [
-                            {
-                                "table": ["xxx"],
-                                "jdbcUrl": "jdbc:mysql://172.28.17.100:9030/",
-                                "selectedDatabase": "xxxx"
-                            }
-                        ],
+                        "connection": [
+                            {
+                                "table": ["xxx"],
+                                "jdbcUrl": "jdbc:mysql://172.28.17.100:9030/",
+                                "selectedDatabase": "xxxx"
+                            }
+                        ],
                         "loadUrl": ["172.28.17.100:8030", "172.28.17.100:8030"],
                         "loadProps": {}
                     }
diff --git a/starrockswriter/pom.xml b/starrockswriter/pom.xml
index 9fb9b147..73a51422 100755
--- a/starrockswriter/pom.xml
+++ b/starrockswriter/pom.xml
@@ -62,9 +62,8 @@
             <version>4.5.3</version>
         </dependency>
         <dependency>
-            <groupId>com.alibaba</groupId>
-            <artifactId>fastjson</artifactId>
-            <version>1.2.75</version>
+            <groupId>com.alibaba.fastjson2</groupId>
+            <artifactId>fastjson2</artifactId>
         </dependency>
         <dependency>
             <groupId>mysql</groupId>
@@ -98,10 +97,6 @@
                             <minimizeJar>true</minimizeJar>
                             <relocations>
-                                <relocation>
-                                    <pattern>com.alibaba.fastjson</pattern>
-                                    <shadedPattern>com.starrocks.shade.com.alibaba.fastjson</shadedPattern>
-                                </relocation>
                                 <relocation>
                                     <pattern>org.apache.http</pattern>
                                     <shadedPattern>com.starrocks.shade.org.apache.http</shadedPattern>
@@ -118,7 +113,6 @@
                                     <include>commons-logging:*</include>
                                     <include>org.apache.httpcomponents:httpclient</include>
                                     <include>org.apache.httpcomponents:httpcore</include>
-                                    <include>com.alibaba:fastjson</include>
diff --git a/starrockswriter/src/main/java/com/starrocks/connector/datax/plugin/writer/starrockswriter/manager/StarRocksStreamLoadVisitor.java b/starrockswriter/src/main/java/com/starrocks/connector/datax/plugin/writer/starrockswriter/manager/StarRocksStreamLoadVisitor.java
index 6bbbba1f..b3671556 100644
--- a/starrockswriter/src/main/java/com/starrocks/connector/datax/plugin/writer/starrockswriter/manager/StarRocksStreamLoadVisitor.java
+++ b/starrockswriter/src/main/java/com/starrocks/connector/datax/plugin/writer/starrockswriter/manager/StarRocksStreamLoadVisitor.java
@@ -6,7 +6,7 @@ import java.net.URL;
 import java.nio.ByteBuffer;
 import java.nio.charset.StandardCharsets;
 
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import com.starrocks.connector.datax.plugin.writer.starrockswriter.StarRocksWriterOptions;
 import com.starrocks.connector.datax.plugin.writer.starrockswriter.row.StarRocksDelimiterParser;
diff --git a/starrockswriter/src/main/java/com/starrocks/connector/datax/plugin/writer/starrockswriter/row/StarRocksJsonSerializer.java b/starrockswriter/src/main/java/com/starrocks/connector/datax/plugin/writer/starrockswriter/row/StarRocksJsonSerializer.java
index 60faa1be..f235a08d 100644
--- a/starrockswriter/src/main/java/com/starrocks/connector/datax/plugin/writer/starrockswriter/row/StarRocksJsonSerializer.java
+++ b/starrockswriter/src/main/java/com/starrocks/connector/datax/plugin/writer/starrockswriter/row/StarRocksJsonSerializer.java
@@ -5,7 +5,7 @@ import java.util.List;
 import java.util.Map;
 
 import com.alibaba.datax.common.element.Record;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 
 public class StarRocksJsonSerializer extends StarRocksBaseSerializer implements StarRocksISerializer {
diff --git a/streamreader/src/main/java/com/alibaba/datax/plugin/reader/streamreader/StreamReader.java b/streamreader/src/main/java/com/alibaba/datax/plugin/reader/streamreader/StreamReader.java
index e3b86659..6b8c55bc 100755
--- a/streamreader/src/main/java/com/alibaba/datax/plugin/reader/streamreader/StreamReader.java
+++ b/streamreader/src/main/java/com/alibaba/datax/plugin/reader/streamreader/StreamReader.java
@@ -5,7 +5,7 @@ import com.alibaba.datax.common.exception.DataXException;
 import com.alibaba.datax.common.plugin.RecordSender;
 import com.alibaba.datax.common.spi.Reader;
 import com.alibaba.datax.common.util.Configuration;
-import com.alibaba.fastjson.JSONObject;
+import com.alibaba.fastjson2.JSONObject;
 import org.apache.commons.lang3.RandomStringUtils;
 import org.apache.commons.lang3.RandomUtils;
diff --git a/tsdbreader/pom.xml b/tsdbreader/pom.xml
index d707fe41..4b3f58c6 100644
--- a/tsdbreader/pom.xml
+++ b/tsdbreader/pom.xml
@@ -41,10 +41,6 @@
                     <artifactId>slf4j-log4j12</artifactId>
                     <groupId>org.slf4j</groupId>
                 </exclusion>
-                <exclusion>
-                    <artifactId>fastjson</artifactId>
-                    <groupId>com.alibaba</groupId>
-                </exclusion>
                 <exclusion>
                     <artifactId>commons-math3</artifactId>
                     <groupId>org.apache.commons</groupId>
@@ -86,8 +82,8 @@
         <dependency>
-            <groupId>com.alibaba</groupId>
-            <artifactId>fastjson</artifactId>
+            <groupId>com.alibaba.fastjson2</groupId>
+            <artifactId>fastjson2</artifactId>
         </dependency>
diff --git a/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/TSDBReader.java b/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/TSDBReader.java
index 550a010a..1f8c3d18 100755
--- a/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/TSDBReader.java
+++ b/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/TSDBReader.java
@@ -6,7 +6,7 @@ import com.alibaba.datax.common.spi.Reader;
 import com.alibaba.datax.common.util.Configuration;
 import com.alibaba.datax.plugin.reader.tsdbreader.conn.TSDBConnection;
 import com.alibaba.datax.plugin.reader.tsdbreader.util.TimeUtils;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.commons.lang3.StringUtils;
 import org.joda.time.DateTime;
 import org.slf4j.Logger;
diff --git a/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/DataPoint4MultiFieldsTSDB.java b/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/DataPoint4MultiFieldsTSDB.java
index 5b380c73..3e8d43d4 100644
--- a/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/DataPoint4MultiFieldsTSDB.java
+++ b/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/DataPoint4MultiFieldsTSDB.java
@@ -1,6 +1,6 @@
 package com.alibaba.datax.plugin.reader.tsdbreader.conn;
 
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 
 import java.util.Map;
diff --git a/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/DataPoint4TSDB.java b/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/DataPoint4TSDB.java
index 5c5c1349..8724bfbb 100644
--- a/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/DataPoint4TSDB.java
+++ b/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/DataPoint4TSDB.java
@@ -1,6 +1,6 @@
 package com.alibaba.datax.plugin.reader.tsdbreader.conn;
 
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 
 import java.util.Map;
diff --git a/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/TSDBConnection.java b/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/TSDBConnection.java
index d466da39..479c16c1 100644
--- a/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/TSDBConnection.java
+++ b/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/TSDBConnection.java
@@ -2,7 +2,7 @@ package com.alibaba.datax.plugin.reader.tsdbreader.conn;
 
 import com.alibaba.datax.common.plugin.RecordSender;
 import com.alibaba.datax.plugin.reader.tsdbreader.util.TSDBUtils;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.commons.lang3.StringUtils;
 
 import java.util.List;
diff --git a/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/TSDBDump.java b/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/TSDBDump.java
index c911a062..05b9c5c2 100644
--- a/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/TSDBDump.java
+++ b/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/conn/TSDBDump.java
@@ -4,8 +4,10 @@ import com.alibaba.datax.common.element.*;
 import com.alibaba.datax.common.plugin.RecordSender;
 import com.alibaba.datax.plugin.reader.tsdbreader.Constant;
 import com.alibaba.datax.plugin.reader.tsdbreader.util.HttpUtils;
-import com.alibaba.fastjson.JSON;
-import com.alibaba.fastjson.parser.Feature;
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONReader;
+import com.alibaba.fastjson2.JSONReader.Feature;
+import com.alibaba.fastjson2.JSONWriter;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -29,7 +31,7 @@ final class TSDBDump {
     private static final String QUERY_MULTI_FIELD = "/api/mquery";
 
     static {
-        JSON.DEFAULT_PARSER_FEATURE &= ~Feature.UseBigDecimal.getMask();
+        JSON.config(Feature.UseBigDecimalForDoubles);
     }
 
     private TSDBDump() {
diff --git a/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/util/HttpUtils.java b/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/util/HttpUtils.java
index 5cba4e54..af81988c 100644
--- a/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/util/HttpUtils.java
+++ b/tsdbreader/src/main/java/com/alibaba/datax/plugin/reader/tsdbreader/util/HttpUtils.java
@@ -1,6 +1,6 @@
 package com.alibaba.datax.plugin.reader.tsdbreader.util;
 
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.http.client.fluent.Content;
 import org.apache.http.client.fluent.Request;
diff --git a/tsdbwriter/pom.xml b/tsdbwriter/pom.xml
index 72215651..9f997123 100644
--- a/tsdbwriter/pom.xml
+++ b/tsdbwriter/pom.xml
@@ -38,10 +38,6 @@
                     <artifactId>slf4j-log4j12</artifactId>
                     <groupId>org.slf4j</groupId>
                 </exclusion>
-                <exclusion>
-                    <artifactId>fastjson</artifactId>
-                    <groupId>com.alibaba</groupId>
-                </exclusion>
                 <exclusion>
                     <artifactId>commons-math3</artifactId>
                     <groupId>org.apache.commons</groupId>
@@ -83,8 +79,8 @@
         <dependency>
-            <groupId>com.alibaba</groupId>
-            <artifactId>fastjson</artifactId>
+            <groupId>com.alibaba.fastjson2</groupId>
+            <artifactId>fastjson2</artifactId>
         </dependency>
diff --git a/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/conn/DataPoint4TSDB.java b/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/conn/DataPoint4TSDB.java
index fee012df..b6e2d309 100644
--- a/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/conn/DataPoint4TSDB.java
+++ b/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/conn/DataPoint4TSDB.java
@@ -1,6 +1,6 @@
 package com.alibaba.datax.plugin.writer.conn;
 
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 
 import java.util.Map;
diff --git a/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/conn/TSDBConnection.java b/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/conn/TSDBConnection.java
index 074f0295..5266f5d9 100644
--- a/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/conn/TSDBConnection.java
+++ b/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/conn/TSDBConnection.java
@@ -2,7 +2,7 @@ package com.alibaba.datax.plugin.writer.conn;
 
 import com.alibaba.datax.common.plugin.RecordSender;
 import com.alibaba.datax.plugin.writer.util.TSDBUtils;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.commons.lang3.StringUtils;
 
 import java.util.List;
diff --git a/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/tsdbwriter/TSDBConverter.java b/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/tsdbwriter/TSDBConverter.java
index 86e35c56..9bde0c9e 100644
--- a/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/tsdbwriter/TSDBConverter.java
+++ b/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/tsdbwriter/TSDBConverter.java
@@ -2,7 +2,7 @@ package com.alibaba.datax.plugin.writer.tsdbwriter;
 
 import com.alibaba.datax.common.element.Column;
 import com.alibaba.datax.common.element.Record;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import com.aliyun.hitsdb.client.value.request.MultiFieldPoint;
 import com.aliyun.hitsdb.client.value.request.Point;
 import org.slf4j.Logger;
diff --git a/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/util/HttpUtils.java b/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/util/HttpUtils.java
index 29b14dab..97055adc 100644
--- a/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/util/HttpUtils.java
+++ b/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/util/HttpUtils.java
@@ -1,6 +1,6 @@
 package com.alibaba.datax.plugin.writer.util;
 
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.http.client.fluent.Content;
 import org.apache.http.client.fluent.Request;
diff --git a/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/util/TSDBUtils.java b/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/util/TSDBUtils.java
index d57c5935..83250b32 100644
--- a/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/util/TSDBUtils.java
+++ b/tsdbwriter/src/main/java/com/alibaba/datax/plugin/writer/util/TSDBUtils.java
@@ -1,7 +1,7 @@
 package com.alibaba.datax.plugin.writer.util;
 
 import com.alibaba.datax.plugin.writer.conn.DataPoint4TSDB;
-import com.alibaba.fastjson.JSON;
+import com.alibaba.fastjson2.JSON;
 import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
diff --git a/txtfilereader/src/main/java/com/alibaba/datax/plugin/reader/txtfilereader/TxtFileReader.java b/txtfilereader/src/main/java/com/alibaba/datax/plugin/reader/txtfilereader/TxtFileReader.java
index 914305c6..a74ef8fc 100755
--- a/txtfilereader/src/main/java/com/alibaba/datax/plugin/reader/txtfilereader/TxtFileReader.java
+++ b/txtfilereader/src/main/java/com/alibaba/datax/plugin/reader/txtfilereader/TxtFileReader.java
@@ -182,6 +182,7 @@ public class TxtFileReader extends Reader {
                                 delimiterInStr));
             }
 
+            UnstructuredStorageReaderUtil.validateCsvReaderConfig(this.originConfig);
         }
 
         @Override