Cilium v1.5 Documentation
in-addr.arpa and ip6.arpa are listed as wildcards for the kubernetes block like this: kubectl -n kube-system edit cm coredns [...] apiVersion: v1 data: Corefile: | .:53 { errors [...] Installation You can monitor as Cilium and all required components are being installed: kubectl -n kube-system get pods --watch NAME READY STATUS RESTARTS [...] as well if required and validate that Cilium is managing kube-dns or coredns by running: kubectl -n kube-system get cep You should see kube-dns-xxx or coredns-xxx pods. In order for the entire system [...]
740 pages | 12.52 MB | 1 year ago

Cilium v1.11 Documentation
7b5cdbcbb8-kjl65 ♻ Restarted unmanaged pod kube-system/stackdriver-metadata-agent-cluster-level-6cc964cddf-8n2rt This indicates that your cluster was already running some pods before Cilium was deployed and [...] instead of VPC CNI, so the aws-node DaemonSet has to be patched to prevent conflicting behavior: kubectl -n kube-system patch daemonset aws-node --type='strategic' -p='{"spec":{"template":{"spec":{"nodeSelector":{"io [...] NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '' | awk '{print "-n "$1" "$2}' | xargs -L 1 -r kubectl delete pod pod "event-exporter-v0.2.3-f9c896d75-cbvcz" deleted pod [...]
1373 pages | 19.37 MB | 1 year ago

Cilium v1.9 Documentation
Installation You can monitor as Cilium and all required components are being installed: kubectl -n kube-system get pods --watch NAME READY STATUS RESTARTS [...] separate namespace for this. kubectl create ns cilium-test Deploy the check with: kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/ci [...] It will deploy a series of deployments which [...] the readiness and liveness gate indicates success or failure of the test: $ kubectl get pods -n cilium-test NAME READY STATUS RESTARTS AGE [...]
1263 pages | 18.62 MB | 1 year ago

Cilium v1.7 Documentation
Installation You can monitor as Cilium and all required components are being installed: kubectl -n kube-system get pods --watch NAME READY STATUS RESTARTS [...] NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '' | awk '{print "-n "$1" "$2}' | xargs -L 1 -r kubectl delete pod pod "event-exporter-v0.2.3-f9c896d75-cbvcz" deleted pod [...]x5f" deleted pod "kube-dns-autoscaler-76fcd5f658-22r72" deleted pod "kube-state-metrics-7d9774bbd5-n6m5k" deleted pod "l7-default-backend-6f8697844f-d2rq2" deleted pod "metrics-server-v0.3.1-54699c9cc8-7l5w2" [...]
885 pages | 12.41 MB | 1 year ago

Cilium v1.6 Documentation
Installation You can monitor as Cilium and all required components are being installed: kubectl -n kube-system get pods --watch NAME READY STATUS RESTARTS [...]
734 pages | 11.45 MB | 1 year ago

Cilium v1.8 Documentation
Installation You can monitor as Cilium and all required components are being installed: kubectl -n kube-system get pods --watch NAME READY STATUS RESTARTS [...] separate namespace for this. kubectl create ns cilium-test Deploy the check with: kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/ci [...] It will deploy a series of deployments which [...] the readiness and liveness gate indicates success or failure of the test: $ kubectl get pods -n cilium-test NAME READY STATUS RESTARTS AGE [...]
1124 pages | 21.33 MB | 1 year ago

Cilium v1.10 Documentation
7b5cdbcbb8-kjl65 ♻ Restarted unmanaged pod kube-system/stackdriver-metadata-agent-cluster-level-6cc964cddf-8n2rt This indicates that your cluster was already running some pods before Cilium was deployed and [...] instead of VPC CNI, so the aws-node DaemonSet has to be patched to prevent conflicting behavior: kubectl -n kube-system patch daemonset aws-node --type='strategic' -p='{"spec":{"template":{"spec":{"nodeSelector":{"io [...] NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '' | awk '{print "-n "$1" "$2}' | xargs -L 1 -r kubectl delete pod pod "event-exporter-v0.2.3-f9c896d75-cbvcz" deleted pod [...]
1307 pages | 19.26 MB | 1 year ago

Hardware Breakpoint implementation in BCC
syscall(__NR_perf_event_open, attr, pid, cpu, -1, PERF_FLAG_FD_CLOEXEC); if (efd < 0) { printf("event fd %d err %s\n", efd, strerror(errno)); return; } [...] libbpf.c Usage bpf_text = """ #include [...] #include [...] lookup_or_init(&key, &zero); (*val)++; bpf_trace_printk("Hello, World! Here I accessed an address!\\n"); return 0; } """ b = BPF(text=bpf_text) symbol_addr = input() pid = input() bp_type = input() b [...]
8 pages | 2.02 MB | 1 year ago

Debugging Go in production using eBPF
return res } What if we just want to log the iterations? Use Case fmt.Printf("iterations: %d\n", iterations) YOUR OPTIONS Option 1: Add a log to your program, re-compile and re-deploy. ○ This [...]
14 pages | 746.99 KB | 1 year ago
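The last entry above breaks off at "Option 1": add a log line, re-compile, and re-deploy — the approach that eBPF-based runtime instrumentation is meant to avoid. A minimal sketch of that option is below; the function name `sumTo` and its loop body are illustrative assumptions, since the slides only show the `fmt.Printf` call and a `return res`.

```go
package main

import "fmt"

// sumTo stands in for the function being debugged: we want to know how
// many loop iterations ran. Option 1 bakes the answer into the binary
// with a log line, which costs a re-compile and re-deploy every time
// the question changes.
func sumTo(n int) int {
	res := 0
	iterations := 0
	for i := 1; i <= n; i++ {
		res += i
		iterations++
	}
	// The added log line from the slides' "Option 1".
	fmt.Printf("iterations: %d\n", iterations)
	return res
}

func main() {
	fmt.Println(sumTo(10))
}
```

Attaching a uprobe to `sumTo` at runtime would recover the same value without touching the source, which is the trade-off the presentation explores.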
9 results in total