Scalable Stream Processing - Spark Streaming and Flink
Amir H. Payberah (payberah@kth.se), 05/10/2018. Course web page: https://id2221kth.github.io
Outline ▶ Spark streaming ▶ Flink
Contribution ▶ Design issues • Continuous vs. micro-batch processing • Record-at-a-time vs. declarative APIs
Spark Streaming ▶ Discretized Stream Processing (DStream): the live stream is handled as a sequence of small batches, each treated as RDDs and processed using RDD operations ▶ Run a streaming computation as a series of very small, deterministic batch jobs • Chops …
113 pages | 1.22 MB | 1 year ago
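The DStream model summarized in the entry above is easy to see in code. Below is a minimal sketch of a Spark Streaming word count (not taken from the slides themselves), assuming a local master and a hypothetical TCP text source on localhost:9999; each micro-batch is handled as an RDD with ordinary RDD-style operations.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DStreamWordCount {
  def main(args: Array[String]): Unit = {
    // Micro-batch interval of 2 seconds: the live stream is chopped into small batches.
    val conf = new SparkConf().setMaster("local[2]").setAppName("DStreamWordCount")
    val ssc  = new StreamingContext(conf, Seconds(2))

    // Hypothetical source: a TCP socket producing lines of text on localhost:9999.
    val lines = ssc.socketTextStream("localhost", 9999)

    // Each batch is an RDD; these are ordinary RDD transformations applied per batch.
    val counts = lines.flatMap(_.split(" "))
                      .map(word => (word, 1))
                      .reduceByKey(_ + _)

    counts.print()        // Executed as a series of small, deterministic batch jobs.
    ssc.start()
    ssc.awaitTermination()
  }
}
```

The newer Structured Streaming API would express the same computation declaratively, as a query over an unbounded table rather than through explicit DStream operations, which is the declarative-API side of the design issue the slides mention.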
Spark 简介以及与 Hadoop 的对比 (Introduction to Spark and a Comparison with Hadoop)
1 Introduction to Spark. 1.1 Spark overview: Spark is a general-purpose parallel computing framework in the style of Hadoop MapReduce, open-sourced by the UC Berkeley AMP Lab. It implements distributed computation based on the map-reduce model and shares the advantages of Hadoop MapReduce; unlike MapReduce, however, intermediate job output and final results can be kept in memory, so there is no longer any need to read and write HDFS between steps. Spark is therefore better suited to iterative map-reduce-style algorithms such as those used in data mining and machine learning.
1.2 Spark core concepts. 1.2.1 Resilient Distributed Dataset (RDD): the RDD is Spark's most fundamental abstraction, an abstraction over distributed memory that lets a distributed dataset be operated on as if it were a local collection. The RDD is the core of Spark; it represents a partitioned, immutable … … such operations are not executed immediately: when Spark encounters a Transformation it only records the requested operation and does not run it; the actual computation starts only when an Action is invoked.
2. Actions (e.g. count, collect, save): an Action returns a result or writes RDD data to a storage system. Actions are what trigger Spark to start computing.
3 pages | 172.14 KB | 1 year ago
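The Transformation/Action split described in the entry above is the heart of Spark's lazy evaluation, and a minimal sketch makes it concrete (the input path and its contents are hypothetical): the transformations below are only recorded in the RDD lineage, and nothing runs until an action such as count or take is called.

```scala
import org.apache.spark.sql.SparkSession

object LazyRddExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("LazyRddExample")
      .getOrCreate()
    val sc = spark.sparkContext

    // Transformations: only recorded in the RDD lineage; nothing executes here.
    val lines  = sc.textFile("data/input.txt")   // hypothetical input path
    val errors = lines.filter(_.contains("ERROR"))
    val tagged = errors.map(line => (line.length, line))

    // Actions: these calls actually launch jobs over the recorded lineage.
    println(s"error lines: ${errors.count()}")   // count is an action
    tagged.take(3).foreach(println)              // take returns a small sample to the driver

    spark.stop()
  }
}
```

Because nothing runs until count or take is called, Spark can plan and pipeline the whole recorded lineage before executing it.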
MATLAB与Spark/Hadoop相集成：实现大数据的处理和价值挖… (Integrating MATLAB with Spark/Hadoop: processing big data and extracting its value). MathWorks, Inc., 马文辉
Contents ▪ Big data and the challenges it brings ▪ Big data processing in MATLAB ➢ tall arrays ➢ parallel and distributed computing ▪ Integrating MATLAB with Spark/Hadoop ➢ accessing HDFS (the Hadoop Distributed File System) from MATLAB ➢ running MATLAB code on a Spark/Hadoop cluster
▪ MapReduce (MDCS/PCT) ▪ MATLAB API for Spark ▪ Tall Arrays ▪ Compute: Desktop (multicore, GPU), clusters, cloud computing (MDCS on EC2), Hadoop, Spark ▪ Memory and data access: 64-bit processors, memory … (Parallel Computing Toolbox) ▪ Distributed computing on a MATLAB cluster (MDCS, MATLAB Distributed Computing Server)
Integrating MATLAB with Spark/Hadoop: MDCS. Hadoop is a distributed big-data processing platform that runs across a cluster of computers and consists of two parts: • YARN (Yet Another Resource Negotiator) …
17 pages | 1.64 MB | 1 year ago
Apache Kyuubi 1.3.0 Documentation
… a multi-tenant JDBC interface for large-scale data processing and analytics, built on top of Apache Spark™ (http://spark.apache.org/). In general, the complete ecosystem of Kyuubi falls into the hierarchies shown in the above figure, with each layer loosely coupled to the others. For example, you can use Kyuubi, Spark and Apache Iceberg (https://iceberg.apache.org/) to build and manage a Data Lake with pure SQL, for both … multi-tenancy, and this is why we want to create this project despite that the Spark Thrift JDBC/ODBC server (http://spark.apache.org/docs/latest/sql-distributed-sql-engine.html#running-the-thrift-jdbcodbc-server) …
199 pages | 4.42 MB | 1 year ago

Apache Kyuubi 1.3.1 Documentation
… a multi-tenant JDBC interface for large-scale data processing and analytics, built on top of Apache Spark™ (http://spark.apache.org/). In general, the complete ecosystem of Kyuubi falls into the hierarchies shown in the above figure, with each layer loosely coupled to the others. For example, you can use Kyuubi, Spark and Apache Iceberg (https://iceberg.apache.org/) to build and manage a Data Lake with pure SQL, for both … multi-tenancy, and this is why we want to create this project despite that the Spark Thrift JDBC/ODBC server (http://spark.apache.org/docs/latest/sql-distributed-sql-engine.html#running-the-thrift-jdbcodbc-server) …
199 pages | 4.44 MB | 1 year ago

Apache Kyuubi 1.4.1 Documentation
… a multi-tenant JDBC interface for large-scale data processing and analytics, built on top of Apache Spark™ (http://spark.apache.org/). In general, the complete ecosystem of Kyuubi falls into the hierarchies shown in the above figure, with each layer loosely coupled to the others. For example, you can use Kyuubi, Spark and Apache Iceberg (https://iceberg.apache.org/) to build and manage a Data Lake with pure SQL, for both … multi-tenancy, and this is why we want to create this project despite that the Spark Thrift JDBC/ODBC server (http://spark.apache.org/docs/latest/sql-distributed-sql-engine.html#running-the-thrift-jdbcodbc-server) …
233 pages | 4.62 MB | 1 year ago

Apache Kyuubi 1.4.0 Documentation
… a multi-tenant JDBC interface for large-scale data processing and analytics, built on top of Apache Spark™ (http://spark.apache.org/). In general, the complete ecosystem of Kyuubi falls into the hierarchies shown in the above figure, with each layer loosely coupled to the others. For example, you can use Kyuubi, Spark and Apache Iceberg (https://iceberg.apache.org/) to build and manage a Data Lake with pure SQL, for both … multi-tenancy, and this is why we want to create this project despite that the Spark Thrift JDBC/ODBC server (http://spark.apache.org/docs/latest/sql-distributed-sql-engine.html#running-the-thrift-jdbcodbc-server) …
233 pages | 4.62 MB | 1 year ago

Apache Kyuubi 1.5.1 Documentation
… a multi-tenant JDBC interface for large-scale data processing and analytics, built on top of Apache Spark™ (http://spark.apache.org/). In general, the complete ecosystem of Kyuubi falls into the hierarchies shown in the above figure, with each layer loosely coupled to the others. For example, you can use Kyuubi, Spark and Apache Iceberg (https://iceberg.apache.org/) to build and manage a Data Lake with pure SQL, for both … multi-tenancy, and this is why we want to create this project despite that the Spark Thrift JDBC/ODBC server (http://spark.apache.org/docs/latest/sql-distributed-sql-engine.html#running-the-thrift-jdbcodbc-server) …
267 pages | 5.80 MB | 1 year ago

Apache Kyuubi 1.5.2 Documentation
… a multi-tenant JDBC interface for large-scale data processing and analytics, built on top of Apache Spark™ (http://spark.apache.org/). In general, the complete ecosystem of Kyuubi falls into the hierarchies shown in the above figure, with each layer loosely coupled to the others. For example, you can use Kyuubi, Spark and Apache Iceberg (https://iceberg.apache.org/) to build and manage a Data Lake with pure SQL, for both … multi-tenancy, and this is why we want to create this project despite that the Spark Thrift JDBC/ODBC server (http://spark.apache.org/docs/latest/sql-distributed-sql-engine.html#running-the-thrift-jdbcodbc-server) …
267 pages | 5.80 MB | 1 year ago

Apache Kyuubi 1.5.0 Documentation
… a multi-tenant JDBC interface for large-scale data processing and analytics, built on top of Apache Spark™ (http://spark.apache.org/). In general, the complete ecosystem of Kyuubi falls into the hierarchies shown in the above figure, with each layer loosely coupled to the others. For example, you can use Kyuubi, Spark and Apache Iceberg (https://iceberg.apache.org/) to build and manage a Data Lake with pure SQL, for both … multi-tenancy, and this is why we want to create this project despite that the Spark Thrift JDBC/ODBC server (http://spark.apache.org/docs/latest/sql-distributed-sql-engine.html#running-the-thrift-jdbcodbc-server) …
267 pages | 5.80 MB | 1 year ago
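The Kyuubi entries above all describe the same idea: a multi-tenant JDBC front end whose SQL is executed by Spark engines managed on the server side. A minimal client-side sketch is an ordinary JDBC connection through the Hive JDBC driver (which must be on the classpath); the host name, database, and credentials below are hypothetical, and 10009 is assumed to be the server's default frontend port.

```scala
import java.sql.DriverManager

object KyuubiJdbcExample {
  def main(args: Array[String]): Unit = {
    // Kyuubi speaks the HiveServer2 Thrift protocol, so the standard Hive JDBC driver is used.
    Class.forName("org.apache.hive.jdbc.HiveDriver")

    // Hypothetical endpoint and credentials; 10009 is assumed to be the default Kyuubi port.
    val url  = "jdbc:hive2://kyuubi-host:10009/default"
    val conn = DriverManager.getConnection(url, "alice", "")

    try {
      val stmt = conn.createStatement()
      // The query runs on a Spark engine that the server starts or reuses for this user.
      val rs = stmt.executeQuery("SELECT 1 AS ok")
      while (rs.next()) println(s"ok = ${rs.getInt(1)}")
    } finally {
      conn.close()
    }
  }
}
```

From the client's point of view this looks much like talking to the Spark Thrift JDBC/ODBC server; the difference the excerpts emphasize is the multi-tenancy that Kyuubi adds on the server side.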
254 results in total.