This article explains how to use the Flink Table API and SQL with Elasticsearch. The walkthrough below is short and easy to follow.
Use Table API & SQL with the Flink Elasticsearch Connector to write data into an Elasticsearch index.
Example environment
java.version: 1.8.x
flink.version: 1.11.1
elasticsearch: 6.x
Example data source (download from the project's Gitee repository)
Flink Series: Setting up the Development Environment and Data
Example module (pom.xml)
Flink Series: Table API & SQL Example Modules
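If you are wiring the connector into your own pom.xml, a minimal dependency sketch for Flink 1.11.1 with Elasticsearch 6 might look like the following (the `_2.11` Scala suffix is an assumption; match it to your build, and see the project's pom.xml for the full module setup):

```xml
<!-- Flink Elasticsearch 6 connector for the legacy/DataStream sink and Table connector.
     The _2.11 Scala suffix is an assumption; use the suffix matching your Flink distribution. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-elasticsearch6_2.11</artifactId>
    <version>1.11.1</version>
</dependency>
```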
InsertToEs.java
package com.flink.examples.elasticsearch;

import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableResult;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @Description Use Table API & SQL with the Flink Elasticsearch connector to write data into an Elasticsearch index
 */
public class InsertToEs {

    /**
     * Apache Flink has two relational APIs for unified stream and batch processing: the Table API and SQL.
     * Official docs: https://ci.apache.org/projects/flink/flink-docs-release-1.11/zh/dev/table/connectors/elasticsearch.html
     */
    //See the property configuration class: ElasticsearchValidator
    static String table_sql =
            "CREATE TABLE my_users (\n" +
            "  user_id STRING,\n" +
            "  user_name STRING,\n" +
            "  uv BIGINT,\n" +
            "  pv BIGINT,\n" +
            "  PRIMARY KEY (user_id) NOT ENFORCED\n" +
            ") WITH (\n" +
            "  'connector.type' = 'elasticsearch',\n" +
            "  'connector.version' = '6',\n" +
            "  'connector.property-version' = '1',\n" +
            "  'connector.hosts' = 'http://192.168.110.35:9200',\n" +
            "  'connector.index' = 'users',\n" +
            "  'connector.document-type' = 'doc',\n" +
            "  'format.type' = 'json',\n" +
            "  'update-mode' = 'append' -- append|upsert\n" +
            ")";

    public static void main(String[] args) {
        //Build the StreamExecutionEnvironment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        //Use processing time as the default stream time characteristic
        env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
        //Build EnvironmentSettings and select the Blink planner
        EnvironmentSettings bsSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
        //Build the StreamTableEnvironment
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, bsSettings);

        //Register the Elasticsearch sink table
        tEnv.executeSql(table_sql);

        //The Elasticsearch connector currently supports only sink, not source. You cannot SELECT from an
        //Elasticsearch table, so data can only be submitted via INSERT.
        String sql = "insert into my_users (user_id,user_name,uv,pv) values('10003','tom',31,10)";

        //First approach: execute the INSERT directly (commented out so the row is not inserted twice)
        //TableResult tableResult = tEnv.executeSql(sql);

        //Second approach: declare a statement set to execute the SQL
        StatementSet stmtSet = tEnv.createStatementSet();
        stmtSet.addInsertSql(sql);
        TableResult tableResult = stmtSet.execute();
        tableResult.print();
    }
}
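The DDL above uses the legacy `connector.*` option style. Flink 1.11 also introduced a newer, flatter option style for the Elasticsearch connector; a sketch of the equivalent DDL follows (option names taken from the 1.11 connector docs — verify them against the Flink version you actually run):

```sql
-- Newer Flink 1.11 option style (sketch; verify option names against your Flink version)
CREATE TABLE my_users (
  user_id STRING,
  user_name STRING,
  uv BIGINT,
  pv BIGINT,
  PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
  'connector' = 'elasticsearch-6',
  'hosts' = 'http://192.168.110.35:9200',
  'index' = 'users',
  'document-type' = 'doc'
);
```

With this style the JSON format is the default, and append versus upsert behavior is derived from whether the table declares a primary key rather than from an `update-mode` option.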
Printed result (the -1 below means the affected row count is unknown, since the INSERT is submitted as an asynchronous Flink job):
+-------------------------------------------+
| default_catalog.default_database.my_users |
+-------------------------------------------+
|                                        -1 |
+-------------------------------------------+
1 row in set