Apache Kyuubi 1.9.0-SNAPSHOT Documentation
In local mode, the engine operates on the same node as the Kyuubi server; in YARN mode, the engine runs within the Application Master container of YARN. kyuubi.engine.hive.event.loggers JSON: a comma-separated list of engine history loggers. Kyuubi engine container core number when the engine runs on YARN. kyuubi.engine.yarn.java.options: the extra Java options for the AM when the engine runs on YARN. kyuubi.engine.yarn.memory 1024: Kyuubi engine container memory in MB when the engine runs on YARN. Note that the default value is set to false when the engine runs on Kubernetes, to prevent potential network issues. boolean 1.5.0. kyuubi.frontend.login.timeout PT20S (deprecated): timeout for Thrift clients.
0 码力 | 405 pages | 4.96 MB | 1 year ago

Apache Kyuubi 1.5.0 Documentation
We demo running Kyuubi on macOS and Hue on Docker for Mac; there are several known network limitations, and you can find workarounds from here. Configuration: 1. Copy a configuration template from Hue.
# Host of the Spark Thrift Server
# For macOS users, use docker.for.mac.host.internal to access host network
sql_server_host=docker.for.mac.host.internal
# Port of the Spark Thrift Server
sql_server_port=10009
--conf spark.dynamicAllocation.enabled=false \
--conf spark.shuffle.service.enabled=false \
--conf spark.kubernetes.container.image=local://…
When running shell, you can use cmd kubectl.
0 码力 | 172 pages | 6.94 MB | 1 year ago

Apache Kyuubi 1.5.1 Documentation
We demo running Kyuubi on macOS and Hue on Docker for Mac; there are several known network limitations, and you can find workarounds from here. Configuration: 1. Copy a configuration template from Hue.
# Host of the Spark Thrift Server
# For macOS users, use docker.for.mac.host.internal to access host network
sql_server_host=docker.for.mac.host.internal
# Port of the Spark Thrift Server
sql_server_port=10009
--conf spark.dynamicAllocation.enabled=false \
--conf spark.shuffle.service.enabled=false \
--conf spark.kubernetes.container.image=local://…
When running shell, you can use cmd kubectl.
0 码力 | 172 pages | 6.94 MB | 1 year ago

Apache Kyuubi 1.5.2 Documentation
We demo running Kyuubi on macOS and Hue on Docker for Mac; there are several known network limitations, and you can find workarounds from here. Configuration: 1. Copy a configuration template from Hue.
# Host of the Spark Thrift Server
# For macOS users, use docker.for.mac.host.internal to access host network
sql_server_host=docker.for.mac.host.internal
# Port of the Spark Thrift Server
sql_server_port=10009
--conf spark.dynamicAllocation.enabled=false \
--conf spark.shuffle.service.enabled=false \
--conf spark.kubernetes.container.image=local://…
When running shell, you can use cmd kubectl.
0 码力 | 172 pages | 6.94 MB | 1 year ago

Apache Kyuubi 1.8.1 Documentation
Note that the default value is set to false when the engine runs on Kubernetes, to prevent potential network issues. boolean 1.5.0. kyuubi.frontend.login.timeout PT20S (deprecated): timeout for Thrift clients … Kubernetes (Key / Default / Meaning):
• kyuubi.kubernetes.application.state.container (spark-kubernetes-driver): the container name to retrieve the application state from.
• kyuubi.kubernetes.application.state.source: valid values are pod and container. If the source is container and there is a container inside the pod with the name of kyuubi.kubernetes.application.state.container, the application state will be taken from the matched container state. Otherwise, the application state will be taken from the pod.
0 码力 | 405 pages | 5.28 MB | 1 year ago

Apache Kyuubi 1.5.1 Documentation
Docker for Mac [https://docs.docker.com/docker-for-mac/]; there are several known network limitations, and you can find workarounds from here [https://docs.docker.com/docker-for-mac/networking/#kno…].
# Host of the Spark Thrift Server
# For macOS users, use docker.for.mac.host.internal to access host network
sql_server_host=docker.for.mac.host.internal
# Port of the Spark Thrift Server
sql_server_port=10009
--conf spark.dynamicAllocation.enabled=false \
--conf spark.shuffle.service.enabled=false \
--conf spark.kubernetes.container.image=local://…
When running shell, you can use cmd kubectl.
0 码力 | 267 pages | 5.80 MB | 1 year ago

Apache Kyuubi 1.5.2 Documentation
Docker for Mac [https://docs.docker.com/docker-for-mac/]; there are several known network limitations, and you can find workarounds from here [https://docs.docker.com/docker-for-mac/networking/#kno…].
# Host of the Spark Thrift Server
# For macOS users, use docker.for.mac.host.internal to access host network
sql_server_host=docker.for.mac.host.internal
# Port of the Spark Thrift Server
sql_server_port=10009
--conf spark.dynamicAllocation.enabled=false \
--conf spark.shuffle.service.enabled=false \
--conf spark.kubernetes.container.image=local://…
When running shell, you can use cmd kubectl.
0 码力 | 267 pages | 5.80 MB | 1 year ago

Apache Kyuubi 1.5.0 Documentation
Docker for Mac [https://docs.docker.com/docker-for-mac/]; there are several known network limitations, and you can find workarounds from here [https://docs.docker.com/docker-for-mac/networking/#kno…].
# Host of the Spark Thrift Server
# For macOS users, use docker.for.mac.host.internal to access host network
sql_server_host=docker.for.mac.host.internal
# Port of the Spark Thrift Server
sql_server_port=10009
--conf spark.dynamicAllocation.enabled=false \
--conf spark.shuffle.service.enabled=false \
--conf spark.kubernetes.container.image=local://…
When running shell, you can use cmd kubectl.
0 码力 | 267 pages | 5.80 MB | 1 year ago

Apache Kyuubi 1.7.0-rc1 Documentation
--conf spark.dynamicAllocation.enabled=false \
--conf spark.shuffle.service.enabled=false \
--conf spark.kubernetes.container.image=local://…
When running shell, you can use cmd kubectl. Volume types:
• emptyDir: an initially empty volume created when a pod is assigned to a node.
• nfs: mounts an existing NFS (Network File System) into a pod.
• persistentVolumeClaim: mounts a PersistentVolume into a pod.
Note: Please …
spark.kubernetes.executor.volumes.<type>.<name>.options.path=…
spark.kubernetes.executor.volumes.<type>.<name>.mount.path=<container_path>
Read: Using Kubernetes
0 码力 | 206 pages | 3.78 MB | 1 year ago

Apache Kyuubi 1.7.3 Documentation
--conf spark.dynamicAllocation.enabled=false \
--conf spark.shuffle.service.enabled=false \
--conf spark.kubernetes.container.image=local://…
When running shell, you can use cmd kubectl. Volume types:
• emptyDir: an initially empty volume created when a pod is assigned to a node.
• nfs: mounts an existing NFS (Network File System) into a pod.
• persistentVolumeClaim: mounts a PersistentVolume into a pod.
Note: Please …
spark.kubernetes.executor.volumes.<type>.<name>.options.path=…
spark.kubernetes.executor.volumes.<type>.<name>.mount.path=<container_path>
Read: Using Kubernetes
0 码力 | 211 pages | 3.79 MB | 1 year ago
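The volume-mount properties quoted in the 1.7.x entries above follow Spark on Kubernetes' spark.kubernetes.executor.volumes.<type>.<name>.* naming pattern. As a minimal sketch, mounting a persistentVolumeClaim into executors could look like the fragment below; the claim name (spark-pvc) and mount path (/data) are illustrative assumptions, not values from the documentation:

```properties
# Hypothetical PVC claim name; replace with an existing claim in your namespace
spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.claimName=spark-pvc
# Path at which the volume appears inside the executor container
spark.kubernetes.executor.volumes.persistentVolumeClaim.data.mount.path=/data
```

The same pattern applies to the driver by substituting `driver` for `executor`, and `nfs` or `emptyDir` for `persistentVolumeClaim`, per the volume types listed above.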
44 results in total
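Pulling together the YARN-related settings quoted in the first entry, a kyuubi-defaults.conf fragment might look like the sketch below. Only keys that appear in the snippets are used; the Java-options value is an illustrative assumption:

```properties
# Kyuubi engine container memory in MB when the engine runs on YARN
kyuubi.engine.yarn.memory=1024
# Extra Java options for the YARN Application Master (illustrative value)
kyuubi.engine.yarn.java.options=-XX:+UseG1GC
# Comma-separated list of engine history loggers
kyuubi.engine.hive.event.loggers=JSON
# (deprecated) timeout for Thrift clients
kyuubi.frontend.login.timeout=PT20S
```

Check these keys against the documentation for the specific Kyuubi version in use, since several of the entries above span 1.5.x through 1.9.0-SNAPSHOT.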