Apache Karaf Container 4.x - Documentation
  5.13.3. Create an OSGi blueprint bundle (karaf-blueprint-archetype) … 5.13.4. Create a features XML (karaf-feature-archetype) … 5.13.5. Create a KAR file (karaf-kar-archetype) … 5.14. Security framework … feature:repo-add camel … feature:install deployer camel-blueprint aries-blueprint … cat > deploy/example.xml <…
  370 pages | 1.03 MB | 1 year ago

Apache Karaf 3.0.5 Guides
  2011, The Apache Software Foundation. The PDF format of the Karaf Manual has been generated by Prince XML (http://www.princexml.com). Table of contents: Overview, Quick Start, Users Guide, Developers Guide … karaf@root()> feature:repo-add camel 2.10.0 … Adding feature url mvn:org.apache.camel.karaf/apache-camel/2.10.0/xml/features … karaf@root()> feature:install camel-spring … karaf@root()> bundle:install -s mvn:org.apache… REPOSITORIES: The features are described in a features XML descriptor. This XML file contains the description of a set of features. PROVISIONING: A features XML descriptor is named a "features repository". Before …
  203 pages | 534.36 KB | 1 year ago

Apache Karaf Cave 3.x - Documentation
  … container for: OSGi bundles (jar files) and OBR (OSGi Bundle Repository) metadata (aka a repository.xml file). By default, a repository uses a filesystem backend for the storage; the directory used is KARAF_BASE/cave. karaf@root()> feature:repo-add mvn:org.apache.karaf.cave/apache-karaf-cave/3.0.0/xml/features … karaf@root()> feature:list | grep -i cave … cave-server | 3.0.0 | x … 0 | file:/home/jbonofre/.m2/repository/repository.xml … 1 | file:/home/jbonofre/apache-karaf-3.0.1/cave/my-repository/repository.xml … You can uninstall a repository from the Apache Karaf OBR …
  18 pages | 101.41 KB | 1 year ago

尚硅谷大数据技术之Hadoop(入门) (Atguigu Big Data: Hadoop, Introduction)
  Default configuration files: [core-default.xml] hadoop-common-3.1.3.jar/core-default.xml; [hdfs-default.xml] hadoop-hdfs-3.1.3.jar/hdfs-default.xml; [yarn-default.xml] hadoop-yarn-common-3.1.3.jar/yarn-default.xml; [mapred-default.xml] hadoop-mapreduce-client-core-3.1.3.jar/mapred-default.xml. (2) Custom configuration files: core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml, kept under $HADOOP_HOME/etc/hadoop; users can modify them to match project needs. 3) Configuring the cluster: (1) core configuration file core-site.xml … [atguigu@hadoop102 ~]$ cd $HADOOP_HOME/etc/hadoop …
  35 pages | 1.70 MB | 1 year ago

Apache Karaf Cellar 4.x - Documentation
  … and shell usage. 3.1. Registering Cellar features: Karaf Cellar is provided as a Karaf features XML descriptor. Simply register the Cellar feature URL in your Karaf instance. Now you have the Cellar features … a hazelcast feature is automatically installed, providing the etc/hazelcast.xml configuration file. The etc/hazelcast.xml configuration file contains all the core configuration, especially the Hazelcast … feature:install cellar-cloud … 4.1. Hazelcast cluster identification: the corresponding element in etc/hazelcast.xml defines the identification of the Hazelcast cluster; all Cellar nodes have to use the same name and …
  39 pages | 177.09 KB | 1 year ago

Apache Karaf Cellar 3.x Documentation
  … and shell usage. 3.1. Registering Cellar features: Karaf Cellar is provided as a Karaf features XML descriptor. Simply register the Cellar feature URL in your Karaf instance: feature:repo-add mvn:org.apache.karaf.cellar/apache-karaf-cellar/3.0.3/xml/features … Adding feature url mvn:org.apache.karaf.cellar/apache-karaf-cellar/3.0.3/xml/features … 3.2. Starting Cellar: To start Cellar in your … a hazelcast feature is automatically installed, providing the etc/hazelcast.xml configuration file, which contains all the core configuration, especially the Hazelcast …
  34 pages | 157.07 KB | 1 year ago

尚硅谷大数据技术之Hadoop(生产调优手册) (Atguigu Big Data: Hadoop, Production Tuning Manual)
  1.2 NameNode heartbeat concurrency, configured in hdfs-site.xml: the number of NameNode RPC server threads that listen to requests from clients (dfs.namenode…) … equal to the value of …interval. (3) fs.trash.checkpoint.interval must be <= fs.trash.interval. 3) Enabling the trash: edit core-site.xml and set the trash retention to 1 minute (fs.trash.interval = 1) … yarn …
  41 pages | 2.32 MB | 1 year ago

Streaming optimizations - CS 591 K1: Data Stream Processing and Analytics, Spring 2020
  /dumprequest HTTP/1.1 … Host: rve.org.uk … Connection: keep-alive … Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 … User-Agent: Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.22 (KHTML, like …
  54 pages | 2.83 MB | 1 year ago

Apache Kyuubi 1.7.0-rc0 Documentation
  … server. Add the repository to your Maven configuration file, which may reside in $MAVEN_HOME/conf/settings.xml: central maven repo … https://repo apache.org/maven2 … configurations as system-wide defaults for all applications it launches. Via hive-site.xml: place your copy of hive-site.xml into $SPARK_HOME/conf; every single Spark application will automatically load this …
  210 pages | 3.79 MB | 1 year ago

Apache Kyuubi 1.7.0-rc1 Documentation
  … server. Add the repository to your Maven configuration file, which may reside in $MAVEN_HOME/conf/settings.xml … Via hive-site.xml: place your copy of hive-site.xml into $SPARK_HOME/conf; every single Spark application will automatically load this …
  206 pages | 3.78 MB | 1 year ago
230 results in total