amazon web services - kube-controller-manager outputs an error "Cannot change NodeName"


I am running Kubernetes on AWS with CoreOS and a flannel VLAN network (I followed this guide: https://coreos.com/kubernetes/docs/latest/getting-started.html). The k8s version is 1.4.6.

I have the following node-exporter DaemonSet:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  labels:
    app: node-exporter
    tier: monitor
    category: platform
spec:
  template:
    metadata:
      labels:
        app: node-exporter
        tier: monitor
        category: platform
      name: node-exporter
    spec:
      containers:
      - image: prom/node-exporter:0.12.0
        name: node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: scrape
      hostNetwork: true
      hostPID: true
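
For reference, a minimal way to create and check the DaemonSet, assuming the manifest above is saved as node-exporter-ds.yaml (that file name is just for illustration):

kubectl create -f node-exporter-ds.yaml
kubectl get daemonset node-exporter
kubectl get pods -l app=node-exporter -o wide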

When I run this, kube-controller-manager repeatedly outputs the error below:

E1117 18:31:23.197206       1 endpoints_controller.go:513] Endpoints "node-exporter" is invalid: [subsets[0].addresses[0].nodeName: Forbidden: Cannot change NodeName for 172.17.64.5 to ip-172-17-64-5.ec2.internal, subsets[0].addresses[1].nodeName: Forbidden: Cannot change NodeName for 172.17.64.6 to ip-172-17-64-6.ec2.internal, subsets[0].addresses[2].nodeName: Forbidden: Cannot change NodeName for 172.17.80.5 to ip-172-17-80-5.ec2.internal, subsets[0].addresses[3].nodeName: Forbidden: Cannot change NodeName for 172.17.80.6 to ip-172-17-80-6.ec2.internal, subsets[0].addresses[4].nodeName: Forbidden: Cannot change NodeName for 172.17.96.6 to ip-172-17-96-6.ec2.internal]

Just for information: despite the error message, node_exporter is accessible on e.g. 172.17.96.6:9100. The nodes are in a private network, including the k8s master.
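
As a quick check from inside the private network (the IP is one of my node IPs; substitute your own), node_exporter answers on its standard /metrics path:

curl http://172.17.96.6:9100/metrics | head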

But these log lines are output so frequently that it is difficult to see other logs in our log console. Does anyone see how to resolve this error?
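
For reference, one way to at least filter out the noise while investigating, as a sketch (on this setup kube-controller-manager runs as a static pod in kube-system; the pod name below is a placeholder):

kubectl --namespace=kube-system get pods | grep controller-manager
kubectl --namespace=kube-system logs <controller-manager-pod-name> | grep -v "Cannot change NodeName"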

Because I built the k8s cluster from scratch, the --cloud-provider=aws flag was not activated at first; I turned it on later, and I am not sure whether that is related to this issue.
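
For context, with the CoreOS guide the flag ends up in the static pod manifests on the master. A minimal, illustrative excerpt of the kube-controller-manager command section with the flag enabled (the other flags and paths are assumptions based on that guide and may differ in your setup):

    command:
    - /hyperkube
    - controller-manager
    - --master=http://127.0.0.1:8080
    - --leader-elect=true
    - --cloud-provider=aws   # the flag that was turned on after the initial build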

It looks like it was caused by this manifest file:

apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  labels:
    app: node-exporter
    tier: monitor
    category: platform
  annotations:
    prometheus.io/scrape: 'true'
spec:
  clusterIP: None
  ports:
  - name: scrape
    port: 9100
    protocol: TCP
  selector:
    app: node-exporter
  type: ClusterIP

I thought it was necessary to expose the node-exporter DaemonSet with the Service above, but it rather introduces some sort of conflict when hostNetwork: true is set in the DaemonSet (actually, pod) manifest. I'm not 100% sure though; after deleting the Service the error disappears, while I can still access 172.17.96.6:9100 from outside of the k8s cluster.
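
A sketch of the commands I used to apply and verify that (names match the manifests above):

# delete the Service whose selector matches the hostNetwork DaemonSet pods
kubectl delete service node-exporter

# its Endpoints object should disappear with it, and the controller-manager errors stop
kubectl get endpoints node-exporter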

I followed this post when setting up Prometheus and node-exporter: https://coreos.com/blog/prometheus-and-kubernetes-up-and-running.html

In case others face the same problem, I'm leaving this comment here.

