All ten documentation packages listed below match the same excerpt from the Apache Kyuubi docs; the two sketches after the list illustrate the folder-selection rule and the JDBC access pattern the excerpt mentions.

… And Spark Block Cleaner will select folders that start with blockmgr and spark for deletion, using the logic Spark uses to create those folders. Before deleting those files, Spark Block Cleaner will determine … Spark ThriftServer through simple SQL language and a JDBC interface to implement their own business logic. The basic capacity planning of Spark ThriftServer, the consolidation of underlying services, and migrating from HiveServer2 for the reason of query speed. With UDF/UDAF support, some complex logic can still be fulfilled, so basically, Spark ThriftServer is able to deal with most of the big data …

- Apache Kyuubi 1.3.0 Documentation | 0 credits | 129 pages | 6.15 MB | 1 year ago
- Apache Kyuubi 1.3.1 Documentation | 0 credits | 129 pages | 6.16 MB | 1 year ago
- Apache Kyuubi 1.3.0 Documentation | 0 credits | 199 pages | 4.42 MB | 1 year ago
- Apache Kyuubi 1.3.1 Documentation | 0 credits | 199 pages | 4.44 MB | 1 year ago
- Apache Kyuubi 1.4.1 Documentation | 0 credits | 148 pages | 6.26 MB | 1 year ago
- Apache Kyuubi 1.4.0 Documentation | 0 credits | 148 pages | 6.26 MB | 1 year ago
- Apache Kyuubi 1.5.0 Documentation | 0 credits | 172 pages | 6.94 MB | 1 year ago
- Apache Kyuubi 1.4.1 Documentation | 0 credits | 233 pages | 4.62 MB | 1 year ago
- Apache Kyuubi 1.5.1 Documentation | 0 credits | 172 pages | 6.94 MB | 1 year ago
- Apache Kyuubi 1.4.0 Documentation | 0 credits | 233 pages | 4.62 MB | 1 year ago
44 results in total.
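The excerpt's first fragment describes Spark Block Cleaner selecting the scratch directories Spark leaves behind (names beginning with blockmgr and spark). The following is a minimal Java sketch of that selection rule only, not the actual Kyuubi tool; the local directory path and the 24-hour staleness threshold are placeholder assumptions.

```java
import java.io.File;
import java.time.Duration;
import java.time.Instant;

public class BlockDirSelector {
    public static void main(String[] args) {
        // Hypothetical local directory holding Spark scratch data; adjust to your node's configuration.
        File localDir = new File("/data/spark/local");
        // Placeholder threshold: only consider directories untouched for longer than this.
        Duration maxAge = Duration.ofHours(24);

        File[] children = localDir.listFiles(File::isDirectory);
        if (children == null) return;

        Instant cutoff = Instant.now().minus(maxAge);
        for (File dir : children) {
            String name = dir.getName();
            // Match the prefixes Spark uses when it creates these folders.
            boolean sparkOwned = name.startsWith("blockmgr-") || name.startsWith("spark-");
            boolean stale = Instant.ofEpochMilli(dir.lastModified()).isBefore(cutoff);
            if (sparkOwned && stale) {
                // A real cleaner would recursively delete here; this sketch only reports candidates.
                System.out.println("candidate for deletion: " + dir.getAbsolutePath());
            }
        }
    }
}
```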
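The second fragment refers to reaching Spark ThriftServer (or Kyuubi) with plain SQL over JDBC. Below is a minimal client sketch, assuming the Hive JDBC driver is on the classpath; the endpoint, credentials, and table name are placeholder assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SimpleJdbcQuery {
    public static void main(String[] args) throws Exception {
        // Register the Hive JDBC driver shipped with the Hive/Kyuubi client jars.
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Hypothetical endpoint: adjust host, port, database, and credentials to your deployment.
        String url = "jdbc:hive2://localhost:10009/default";

        try (Connection conn = DriverManager.getConnection(url, "user", "");
             Statement stmt = conn.createStatement();
             // Placeholder query against a hypothetical table.
             ResultSet rs = stmt.executeQuery("SELECT id, name FROM sample_table LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1) + "\t" + rs.getString(2));
            }
        }
    }
}
```

The same client code works against either server; typically only the port in the connection URL differs (10000 for a default Spark ThriftServer, 10009 for a default Kyuubi deployment).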