Cilium's Network Acceleration Secrets
Traditional policy filtering based on IP addresses has several drawbacks:
• A single filter rule may need to record a large number of CIDRs.
• Endpoint identity is coupled to the IP address: when a pod restarts and its IP changes, the whole cluster may need to synchronize that information and refresh OVS flow tables or ipset rules.
• Large-scale policy degrades rule-lookup efficiency and rule-update latency to some extent, both of which introduce TPS jitter.
Cilium instead makes its L3-L4 policy decisions on identities:
• A cluster-unique identity is computed from a Cilium endpoint's labels, so a pod IP change does not trigger any policy synchronization, effectively reducing the synchronization frequency.
• Multiple endpoints subject to the same policy can share one identity; for example, the pod replicas of one deployment share the same identity, which means the cluster can effectively reduce …

Cilium v1.5 Documentation
tracing: Why is a packet being dropped or a request rejected? The policy tracing framework allows tracing the policy decision process both for running workloads and for arbitrary label definitions. … Integrated capability of the Linux kernel to accept compiled bytecode that is run at various hook / trace points within the kernel. Cilium compiles BPF programs and has the kernel run them at key points … way to verify if and what policy rules apply between two endpoints. We can use cilium policy trace to simulate a policy decision between the source and destination endpoints. We will use the example …

Cilium v1.6 Documentation
tracing: Why is a packet being dropped or a request rejected? The policy tracing framework allows tracing the policy decision process both for running workloads and for arbitrary label definitions. … Integrated capability of the Linux kernel to accept compiled bytecode that is run at various hook / trace points within the kernel. Cilium compiles BPF programs and has the kernel run them at key points … way to verify if and what policy rules apply between two endpoints. We can use cilium policy trace to simulate a policy decision between the source and destination endpoints. We will use the example …

Cilium v1.7 Documentation
tracing: Why is a packet being dropped or a request rejected? The policy tracing framework allows tracing the policy decision process both for running workloads and for arbitrary label definitions. … Integrated capability of the Linux kernel to accept compiled bytecode that is run at various hook / trace points within the kernel. Cilium compiles BPF programs and has the kernel run them at key points … way to verify if and what policy rules apply between two endpoints. We can use cilium policy trace to simulate a policy decision between the source and destination endpoints. We will use the example …

Cilium v1.8 Documentation
tracing: Why is a packet being dropped or a request rejected? The policy tracing framework allows tracing the policy decision process both for running workloads and for arbitrary label definitions. … way to verify if and what policy rules apply between two endpoints. We can use cilium policy trace to simulate a policy decision between the source and destination endpoints. We will use the example [https://cilium.readthedocs.io/en/latest/gettingstarted/minikube/#getting-started-using-minikube] to trace the policy. In this example, there is: a deathstar service identified by labels: org=empire, class=deathstar …

Cilium v1.10 Documentation
tracing: Why is a packet being dropped or a request rejected? The policy tracing framework allows tracing the policy decision process both for running workloads and for arbitrary label definitions. … way to verify if and what policy rules apply between two endpoints. We can use cilium policy trace to simulate a policy decision between the source and destination endpoints. We will use the example from the HTTP-Aware Policy Enforcement Guide [https://cilium.readthedocs.io/en/latest/gettingstarted/http/] to trace the policy. In this example, there is: a deathstar service identified by labels: org=empire, class=deathstar …

Cilium v1.9 Documentation
tracing: Why is a packet being dropped or a request rejected? The policy tracing framework allows tracing the policy decision process both for running workloads and for arbitrary label definitions. … way to verify if and what policy rules apply between two endpoints. We can use cilium policy trace to simulate a policy decision between the source and destination endpoints. We will use the example [https://cilium.readthedocs.io/en/latest/gettingstarted/minikube/#getting-started-using-minikube] to trace the policy. In this example, there is: a deathstar service identified by labels: org=empire, class=deathstar …

Cilium v1.11 Documentation
… community interest. It is planned for removal in 1.12. The in-pod Cilium CLI command cilium policy trace has been deprecated in favor of approaches using the Network Policy Editor [https://app.networkpolicy… ] … BPF_MAP_TYPE_PROG_ARRAY, BPF_MAP_TYPE_PERF_EVENT_ARRAY, BPF_MAP_TYPE_CGROUP_ARRAY, BPF_MAP_TYPE_STACK_TRACE, BPF_MAP_TYPE_ARRAY_OF_MAPS, BPF_MAP_TYPE_HASH_OF_MAPS. For example, BPF_MAP_TYPE_PROG_ARRAY is an … and early throw an error to the user. Helper functions such as trace_printk() can be worked around as follows: static void BPF_FUNC(trace_printk, const char *fmt, int fmt_size, ...); #ifndef printk # …
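The identity-based policy design described in the first result can be sketched in a few lines. This is an illustrative model only, not Cilium's implementation: the `identity_for` and `policy_check` names, the label-hashing scheme, and the `(src identity, dst identity, port)` table shape are all assumptions made for the sketch. It shows why a pod restart that changes the IP requires no policy update, and why replicas with the same labels share one identity.

```python
import hashlib

def identity_for(labels: dict) -> int:
    # Derive a stable numeric identity from the endpoint's sorted label set.
    # (Illustrative scheme; Cilium allocates identities differently.)
    canonical = ",".join(f"{k}={v}" for k, v in sorted(labels.items()))
    return int.from_bytes(hashlib.sha256(canonical.encode()).digest()[:3], "big")

# Two replicas of one deployment carry the same labels, so they share an identity,
# even though each pod has (and may change) its own IP.
web_a = {"app": "web", "env": "prod"}
web_b = {"env": "prod", "app": "web"}  # same labels, different pod, different IP
assert identity_for(web_a) == identity_for(web_b)

# Policy is keyed by (src identity, dst identity, dst port) -- no IPs anywhere,
# so an IP change leaves this table untouched.
allowed = {(identity_for(web_a), identity_for({"app": "db"}), 5432)}

def policy_check(src_labels: dict, dst_labels: dict, dport: int) -> bool:
    return (identity_for(src_labels), identity_for(dst_labels), dport) in allowed

print(policy_check(web_b, {"app": "db"}, 5432))             # True
print(policy_check({"app": "batch"}, {"app": "db"}, 5432))  # False
```

Because the lookup key never contains an IP, only label changes (not restarts) can alter a policy verdict, which is the synchronization saving the slide describes.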
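The documentation snippets above describe `cilium policy trace` as simulating a policy decision between a source and a destination endpoint. Conceptually, that is a walk over the rule list reporting which rule (if any) selects the two label sets and the port. The sketch below is hypothetical — the `selects`/`trace` helpers and the rule dictionary shape are inventions for illustration, not Cilium's data model — but it reuses the deathstar/empire labels from the snippets.

```python
def selects(selector: dict, labels: dict) -> bool:
    # A selector matches when every required label is present with that value.
    return all(labels.get(k) == v for k, v in selector.items())

def trace(rules: list, src: dict, dst: dict, dport: int) -> str:
    # Walk the rules in order and report the first match, like a policy trace.
    for i, rule in enumerate(rules):
        if (selects(rule["from"], src) and selects(rule["to"], dst)
                and dport in rule["ports"]):
            return f"rule {i} matched -> ALLOWED"
    return "no rule matched -> DENIED"

# Labels borrowed from the deathstar example in the snippets above.
rules = [{"from": {"org": "empire"}, "to": {"class": "deathstar"}, "ports": {80}}]
print(trace(rules, {"org": "empire", "class": "tiefighter"},
            {"org": "empire", "class": "deathstar"}, 80))   # rule 0 matched -> ALLOWED
print(trace(rules, {"org": "alliance", "class": "xwing"},
            {"org": "empire", "class": "deathstar"}, 80))   # no rule matched -> DENIED
```

The real command additionally resolves pod names to label sets and identities before evaluating, and per the v1.11 entry it is deprecated in favor of the Network Policy Editor.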
8 results in total