PR | nckturner: Webhook framework for cloud controller manager |
Result | FAILURE |
Tests | 26 failed / 58 succeeded |
Started | |
Elapsed | 2h2m |
Revision | |
Builder | cf99fe22-5f8a-11ed-9cf5-52f126304da0 |
Refs |
master:b3ed40b1 108838:a1b744b4 |
infra-commit | c48abcbdb |
job-version | v1.26.0-alpha.3.387+504f252722dcc8 |
kubetest-version | v20221107-33c989e684 |
repo | k8s.io/kubernetes |
repo-commit | 504f252722dcc890f8911bede0f02b471a60c2d4 |
repos | {u'k8s.io/kubernetes': u'master:b3ed40b1672f8ace5be3284707c8ca511f1cecef,108838:a1b744b4b21915198de7dd0c85c1c15a247e257d'} |
revision | v1.26.0-alpha.3.387+504f252722dcc8 |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sAdmissionWebhook\s\[Privileged\:ClusterAdmin\]\sshould\smutate\scustom\sresource\s\[Conformance\]$'
test/e2e/apimachinery/webhook.go:827 k8s.io/kubernetes/test/e2e/apimachinery.deployWebhookAndService(0xc000b89d10, {0xc003c66990, 0x2c}, 0xc004430280, 0x20fb, 0x20fc) test/e2e/apimachinery/webhook.go:827 +0xf12 k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.1() test/e2e/apimachinery/webhook.go:102 +0x226from junit_01.xml
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 19:10:06.079 Nov 8 19:10:06.079: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename webhook 11/08/22 19:10:06.08 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 19:10:06.096 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 19:10:06.1 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:90 STEP: Setting up server cert 11/08/22 19:10:06.123 STEP: Create role binding to let webhook read extension-apiserver-authentication 11/08/22 19:10:06.486 STEP: Deploying the webhook pod 11/08/22 19:10:06.496 STEP: Wait for the deployment to be ready 11/08/22 19:10:06.513 Nov 8 19:10:06.521: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 8 19:10:08.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:10.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 10, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:12.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 10, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:14.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 14, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:16.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 14, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:18.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 14, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:20.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 14, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:22.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 14, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:24.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 14, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:26.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:28.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:30.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:32.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:34.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:36.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:38.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:40.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:42.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:44.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:46.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:48.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:50.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:52.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:54.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:56.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:10:58.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:00.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:02.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:04.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:06.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:08.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:10.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:12.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:14.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:16.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:18.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:20.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:22.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:24.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:26.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 26, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:28.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:30.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:32.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:34.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:36.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:38.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:40.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:42.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:44.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:46.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:48.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:50.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:52.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:54.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:56.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:11:58.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:00.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:02.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:04.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:06.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:08.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:10.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:12.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:14.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:16.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:18.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:20.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:22.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:24.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:26.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:28.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:30.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:32.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:34.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:36.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:38.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:40.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:42.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:44.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:46.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:48.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:50.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 11, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:52.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:54.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:56.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:12:58.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:00.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:02.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:04.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:06.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:08.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:10.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:12.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:14.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:16.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:18.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:20.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:22.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:24.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:26.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:28.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:30.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:32.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:34.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:36.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:38.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:40.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:42.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:44.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:46.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:48.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:50.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:52.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:54.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:56.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:13:58.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:00.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:02.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:04.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:06.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:08.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:10.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:12.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:14.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:16.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:18.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:20.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:22.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:24.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:26.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:28.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:30.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:32.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:34.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:36.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:38.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:40.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:42.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:44.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:46.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:48.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:50.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:52.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:54.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:56.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:14:58.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:15:00.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:15:02.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:15:04.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} ------------------------------ Automatically polling progress: [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] (Spec Runtime: 5m0.027s) test/e2e/apimachinery/webhook.go:291 In [BeforeEach] (Node Runtime: 5m0.001s) test/e2e/apimachinery/webhook.go:90 At [By Step] Wait for the deployment to be ready (Step Runtime: 4m59.592s) test/e2e/apimachinery/webhook.go:823 Spec Goroutine goroutine 6917 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc003887860, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x40?, 0x2f7d7e5?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0042bc540?, 0xc0005d9c90?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x25da61f?, 0x65f61e0?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/utils.waitForDeploymentCompleteMaybeCheckRolling({0x7efa648?, 0xc003a1cb60}, 0xc004517680, 0x0, 0x7781c60, 0xc003a1cb60?, 0xc0044929f0?) test/utils/deployment.go:82 k8s.io/kubernetes/test/utils.WaitForDeploymentComplete(...) test/utils/deployment.go:201 k8s.io/kubernetes/test/e2e/framework/deployment.WaitForDeploymentComplete(...) test/e2e/framework/deployment/wait.go:46 > k8s.io/kubernetes/test/e2e/apimachinery.deployWebhookAndService(0xc000b89d10, {0xc003c66990, 0x2c}, 0xc004430280, 0x20fb, 0x20fc) test/e2e/apimachinery/webhook.go:826 | err = e2edeployment.WaitForDeploymentRevisionAndImage(client, namespace, deploymentName, "1", image) | framework.ExpectNoError(err, "waiting for the deployment of image %s in %s in %s to complete", image, deploymentName, namespace) > err = e2edeployment.WaitForDeploymentComplete(client, deployment) | framework.ExpectNoError(err, "waiting for the deployment status valid", image, deploymentName, namespace) | > k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.1() test/e2e/apimachinery/webhook.go:102 | createAuthReaderRoleBinding(f, namespaceName) | > deployWebhookAndService(f, imageutils.GetE2EImage(imageutils.Agnhost), certCtx, servicePort, containerPort) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc003c40ab0, 0xc003c2b440}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 19:15:06.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:15:08.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:15:08.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Nov 8 19:15:08.550: INFO: Unexpected error: waiting for the deployment status valid%!(EXTRA string=registry.k8s.io/e2e-test-images/agnhost:2.40, string=sample-webhook-deployment, string=webhook-3792): <*errors.errorString | 0xc003e1d350>: { s: "error waiting for deployment \"sample-webhook-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:\"NewReplicaSetAvailable\", Message:\"ReplicaSet \\\"sample-webhook-deployment-6c9b47fb9c\\\" has successfully progressed.\"}, v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}}, CollisionCount:(*int32)(nil)}", } Nov 8 19:15:08.550: FAIL: waiting for the deployment status valid%!(EXTRA string=registry.k8s.io/e2e-test-images/agnhost:2.40, string=sample-webhook-deployment, string=webhook-3792): error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 19, 10, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 10, 6, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 19, 12, 52, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.deployWebhookAndService(0xc000b89d10, {0xc003c66990, 0x2c}, 0xc004430280, 0x20fb, 0x20fc) test/e2e/apimachinery/webhook.go:827 +0xf12 k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.1() test/e2e/apimachinery/webhook.go:102 +0x226 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 Nov 8 19:15:08.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:105 [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 19:15:08.594 STEP: Collecting events from namespace "webhook-3792". 11/08/22 19:15:08.594 STEP: Found 11 events. 11/08/22 19:15:08.598 Nov 8 19:15:08.599: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-zz2hw: { } Scheduled: Successfully assigned webhook-3792/sample-webhook-deployment-6c9b47fb9c-zz2hw to 172.17.0.1 Nov 8 19:15:08.599: INFO: At 2022-11-08 19:10:06 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-6c9b47fb9c to 1 Nov 8 19:15:08.599: INFO: At 2022-11-08 19:10:06 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-6c9b47fb9c-zz2hw Nov 8 19:15:08.599: INFO: At 2022-11-08 19:10:08 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-zz2hw: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 19:15:08.599: INFO: At 2022-11-08 19:10:08 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-zz2hw: {kubelet 172.17.0.1} Created: Created container sample-webhook Nov 8 19:15:08.599: INFO: At 2022-11-08 19:10:09 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-zz2hw: {kubelet 172.17.0.1} Started: Started container sample-webhook Nov 8 19:15:08.599: INFO: At 2022-11-08 19:10:09 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-zz2hw: {kubelet 172.17.0.1} Unhealthy: Readiness probe failed: Get "https://10.88.7.99:8444/readyz": dial tcp 10.88.7.99:8444: connect: connection refused Nov 8 19:15:08.599: INFO: At 2022-11-08 19:10:10 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-zz2hw: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 19:15:08.599: INFO: At 2022-11-08 19:10:13 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-zz2hw: {kubelet 172.17.0.1} Unhealthy: Readiness probe failed: Get "https://10.88.7.101:8444/readyz": dial tcp 10.88.7.101:8444: connect: connection refused Nov 8 19:15:08.599: INFO: At 2022-11-08 19:10:16 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-zz2hw: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container sample-webhook in pod sample-webhook-deployment-6c9b47fb9c-zz2hw_webhook-3792(8b4f1796-bdad-4905-84a2-26b7ec1604f5) Nov 8 19:15:08.599: INFO: At 2022-11-08 19:10:25 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-zz2hw: {kubelet 172.17.0.1} Unhealthy: Readiness probe failed: Get "https://10.88.7.107:8444/readyz": dial tcp 10.88.7.107:8444: connect: connection refused Nov 8 19:15:08.603: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 19:15:08.603: INFO: sample-webhook-deployment-6c9b47fb9c-zz2hw 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:10:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:12:52 +0000 UTC ContainersNotReady containers with unready status: [sample-webhook]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:12:52 +0000 UTC ContainersNotReady containers with unready status: [sample-webhook]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:10:06 +0000 UTC }] Nov 8 19:15:08.603: INFO: Nov 8 19:15:08.626: INFO: Logging node info for node 172.17.0.1 Nov 8 19:15:08.630: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 8873 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 19:10:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 19:10:06 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 19:10:06 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 19:10:06 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 19:10:06 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 19:15:08.630: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 19:15:08.633: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 19:15:08.639: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 19:15:08.639: INFO: Container coredns ready: false, restart count 18 Nov 8 19:15:08.681: INFO: Latency metrics for node 172.17.0.1 STEP: Collecting events from namespace "webhook-3792-markers". 11/08/22 19:15:08.682 STEP: Found 0 events. 11/08/22 19:15:08.685 Nov 8 19:15:08.690: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 19:15:08.690: INFO: Nov 8 19:15:08.694: INFO: Logging node info for node 172.17.0.1 Nov 8 19:15:08.698: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 8873 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 19:10:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 19:10:06 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 19:10:06 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 19:10:06 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 19:10:06 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 19:15:08.700: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 19:15:08.703: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 19:15:08.709: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 19:15:08.709: INFO: Container coredns ready: false, restart count 18 Nov 8 19:15:08.769: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 STEP: Destroying namespace "webhook-3792" for this suite. 11/08/22 19:15:08.771 STEP: Destroying namespace "webhook-3792-markers" for this suite. 11/08/22 19:15:08.784
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sAdmissionWebhook\s\[Privileged\:ClusterAdmin\]\sshould\smutate\scustom\sresource\swith\spruning\s\[Conformance\]$'
test/e2e/apimachinery/webhook.go:1917 k8s.io/kubernetes/test/e2e/apimachinery.testMutatingCustomResourceWebhook(0xc000b89d10, 0xc002cb1b80, {0x7ed0d88, 0xc002ff2e60}, 0x1) test/e2e/apimachinery/webhook.go:1917 +0x505 k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.14() test/e2e/apimachinery/webhook.go:369 +0x12bfrom junit_01.xml
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 18:57:50.023 Nov 8 18:57:50.023: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename webhook 11/08/22 18:57:50.024 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 18:57:50.046 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 18:57:50.052 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:90 STEP: Setting up server cert 11/08/22 18:57:50.084 STEP: Create role binding to let webhook read extension-apiserver-authentication 11/08/22 18:57:50.639 STEP: Deploying the webhook pod 11/08/22 18:57:50.654 STEP: Wait for the deployment to be ready 11/08/22 18:57:50.673 Nov 8 18:57:50.681: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Nov 8 18:57:52.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 18, 57, 50, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 18, 57, 50, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 18, 57, 50, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 18, 57, 50, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c9b47fb9c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service 11/08/22 18:57:54.699 STEP: Verifying the service has paired with the endpoint 11/08/22 18:57:54.712 Nov 8 18:57:55.713: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Nov 8 18:57:56.712: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] test/e2e/apimachinery/webhook.go:341 Nov 8 18:57:56.716: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1173-crds.webhook.example.com via the AdmissionRegistration API 11/08/22 18:57:57.232 STEP: Creating a custom resource that should be mutated by the webhook 11/08/22 18:57:57.254 Nov 8 18:58:00.323: INFO: Unexpected error: failed to create custom resource cr-instance-1 in namespace: webhook-8178: <*errors.StatusError | 0xc004621d60>: { ErrStatus: apiVersion: v1 code: 500 details: causes: - message: 'failed calling webhook "mutate-custom-resource-data-stage-1.k8s.io": failed to call webhook: Post "https://e2e-test-webhook.webhook-8178.svc:8443/mutating-custom-resource?timeout=10s": dial tcp 10.0.0.194:8443: connect: connection refused' kind: Status message: 'Internal error occurred: failed calling webhook "mutate-custom-resource-data-stage-1.k8s.io": failed to call webhook: Post "https://e2e-test-webhook.webhook-8178.svc:8443/mutating-custom-resource?timeout=10s": dial tcp 10.0.0.194:8443: connect: connection refused' metadata: {} reason: InternalError status: Failure, } Nov 8 18:58:00.323: FAIL: failed to create custom resource cr-instance-1 in namespace: webhook-8178: Internal error occurred: failed calling webhook "mutate-custom-resource-data-stage-1.k8s.io": failed to call webhook: Post "https://e2e-test-webhook.webhook-8178.svc:8443/mutating-custom-resource?timeout=10s": dial tcp 10.0.0.194:8443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.testMutatingCustomResourceWebhook(0xc000b89d10, 0xc002cb1b80, {0x7ed0d88, 0xc002ff2e60}, 0x1) test/e2e/apimachinery/webhook.go:1917 +0x505 k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.14() test/e2e/apimachinery/webhook.go:369 +0x12b [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 Nov 8 18:58:00.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/webhook.go:105 [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 18:58:00.89 STEP: Collecting events from namespace "webhook-8178". 11/08/22 18:58:00.89 STEP: Found 9 events. 11/08/22 18:58:00.894 Nov 8 18:58:00.894: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-6sbcv: { } Scheduled: Successfully assigned webhook-8178/sample-webhook-deployment-6c9b47fb9c-6sbcv to 172.17.0.1 Nov 8 18:58:00.894: INFO: At 2022-11-08 18:57:50 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-6c9b47fb9c to 1 Nov 8 18:58:00.894: INFO: At 2022-11-08 18:57:50 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-6c9b47fb9c-6sbcv Nov 8 18:58:00.894: INFO: At 2022-11-08 18:57:52 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-6sbcv: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 18:58:00.894: INFO: At 2022-11-08 18:57:52 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-6sbcv: {kubelet 172.17.0.1} Created: Created container sample-webhook Nov 8 18:58:00.894: INFO: At 2022-11-08 18:57:52 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-6sbcv: {kubelet 172.17.0.1} Started: Started container sample-webhook Nov 8 18:58:00.894: INFO: At 2022-11-08 18:57:54 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-6sbcv: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:58:00.894: INFO: At 2022-11-08 18:57:58 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-6sbcv: {kubelet 172.17.0.1} Unhealthy: Readiness probe failed: Get "https://10.88.5.176:8444/readyz": dial tcp 10.88.5.176:8444: connect: connection refused Nov 8 18:58:00.894: INFO: At 2022-11-08 18:58:00 +0000 UTC - event for sample-webhook-deployment-6c9b47fb9c-6sbcv: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container sample-webhook in pod sample-webhook-deployment-6c9b47fb9c-6sbcv_webhook-8178(c25f711f-6ce8-4001-9bc9-19bf21716212) Nov 8 18:58:00.901: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 18:58:00.901: INFO: sample-webhook-deployment-6c9b47fb9c-6sbcv 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:58 +0000 UTC ContainersNotReady containers with unready status: [sample-webhook]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:58 +0000 UTC ContainersNotReady containers with unready status: [sample-webhook]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:50 +0000 UTC }] Nov 8 18:58:00.901: INFO: Nov 8 18:58:00.921: INFO: Logging node info for node 172.17.0.1 Nov 8 18:58:00.924: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 5979 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:54:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 18:58:00.925: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 18:58:00.928: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 18:58:00.934: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 18:58:00.934: INFO: Container coredns ready: false, restart count 14 Nov 8 18:58:00.934: INFO: pod3 started at 2022-11-08 18:57:43 +0000 UTC (0+1 container statuses recorded) Nov 8 18:58:00.934: INFO: Container agnhost ready: false, restart count 1 Nov 8 18:58:00.969: INFO: Latency metrics for node 172.17.0.1 STEP: Collecting events from namespace "webhook-8178-markers". 11/08/22 18:58:00.97 STEP: Found 0 events. 11/08/22 18:58:00.973 Nov 8 18:58:00.977: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 18:58:00.977: INFO: Nov 8 18:58:00.981: INFO: Logging node info for node 172.17.0.1 Nov 8 18:58:00.985: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 5979 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:54:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 18:58:00.985: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 18:58:00.988: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 18:58:00.995: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 18:58:00.995: INFO: Container coredns ready: false, restart count 14 Nov 8 18:58:00.995: INFO: pod3 started at 2022-11-08 18:57:43 +0000 UTC (0+1 container statuses recorded) Nov 8 18:58:00.995: INFO: Container agnhost ready: false, restart count 1 Nov 8 18:58:01.031: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 STEP: Destroying namespace "webhook-8178" for this suite. 11/08/22 18:58:01.032 STEP: Destroying namespace "webhook-8178-markers" for this suite. 11/08/22 18:58:01.044
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sCustomResourceConversionWebhook\s\[Privileged\:ClusterAdmin\]\sshould\sbe\sable\sto\sconvert\sa\snon\shomogeneous\slist\sof\sCRs\s\[Conformance\]$'
test/e2e/apimachinery/crd_conversion_webhook.go:479 k8s.io/kubernetes/test/e2e/apimachinery.waitWebhookConversionReady(0xc000b88780?, 0xc003fd6000?, 0xc003ecf800?, {0x74ab21c?, 0x2?}) test/e2e/apimachinery/crd_conversion_webhook.go:479 +0xf3 k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.4() test/e2e/apimachinery/crd_conversion_webhook.go:208 +0x113from junit_01.xml
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 18:51:00.592 Nov 8 18:51:00.592: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename crd-webhook 11/08/22 18:51:00.593 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 18:51:00.614 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 18:51:00.618 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/crd_conversion_webhook.go:128 STEP: Setting up server cert 11/08/22 18:51:00.621 STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 11/08/22 18:51:00.993 STEP: Deploying the custom resource conversion webhook pod 11/08/22 18:51:01.004 STEP: Wait for the deployment to be ready 11/08/22 18:51:01.021 Nov 8 18:51:01.032: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Nov 8 18:51:03.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-644bcc8d4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 8 18:51:05.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-644bcc8d4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 8 18:51:07.047: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-644bcc8d4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 8 18:51:09.048: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-644bcc8d4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 8 18:51:11.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-644bcc8d4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 8 18:51:13.047: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.November, 8, 18, 51, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-644bcc8d4c\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service 11/08/22 18:51:15.046 STEP: Verifying the service has paired with the endpoint 11/08/22 18:51:15.057 Nov 8 18:51:16.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:17.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:18.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:19.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:20.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:21.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:22.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:23.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:24.057: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:25.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:26.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:27.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:28.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:29.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:30.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:31.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:32.057: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:33.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:34.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:35.060: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:36.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 Nov 8 18:51:37.058: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] test/e2e/apimachinery/crd_conversion_webhook.go:184 Nov 8 18:51:37.062: INFO: >>> kubeConfig: /workspace/.kube/config Nov 8 18:51:40.643: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:41.762: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:42.850: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:43.974: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:45.058: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:46.182: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:47.266: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:48.359: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:49.474: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:50.562: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:51.650: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:52.770: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:53.858: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:54.978: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:56.066: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:57.154: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:58.274: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:51:59.362: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:52:00.451: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:52:01.570: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:52:02.662: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:52:03.778: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:52:04.866: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:52:05.954: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:52:07.074: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:52:08.166: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:52:09.250: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:52:10.374: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:52:11.458: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:52:12.482: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=e2e-test-crd-webhook-3408-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8232.svc:9443/crdconvert?timeout=30s": dial tcp 10.0.0.202:9443: connect: connection refused Nov 8 18:52:12.482: INFO: Unexpected error: <*errors.errorString | 0xc000285cb0>: { s: "timed out waiting for the condition", } Nov 8 18:52:12.482: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.waitWebhookConversionReady(0xc000b88780?, 0xc003fd6000?, 0xc003ecf800?, {0x74ab21c?, 0x2?}) test/e2e/apimachinery/crd_conversion_webhook.go:479 +0xf3 k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.4() test/e2e/apimachinery/crd_conversion_webhook.go:208 +0x113 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/node/init/init.go:32 Nov 8 18:52:12.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/apimachinery/crd_conversion_webhook.go:139 [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 18:52:13.038 STEP: Collecting events from namespace "crd-webhook-8232". 11/08/22 18:52:13.038 STEP: Found 10 events. 11/08/22 18:52:13.042 Nov 8 18:52:13.043: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-crd-conversion-webhook-deployment-644bcc8d4c-ql9r5: { } Scheduled: Successfully assigned crd-webhook-8232/sample-crd-conversion-webhook-deployment-644bcc8d4c-ql9r5 to 172.17.0.1 Nov 8 18:52:13.043: INFO: At 2022-11-08 18:51:01 +0000 UTC - event for sample-crd-conversion-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-crd-conversion-webhook-deployment-644bcc8d4c to 1 Nov 8 18:52:13.043: INFO: At 2022-11-08 18:51:01 +0000 UTC - event for sample-crd-conversion-webhook-deployment-644bcc8d4c: {replicaset-controller } SuccessfulCreate: Created pod: sample-crd-conversion-webhook-deployment-644bcc8d4c-ql9r5 Nov 8 18:52:13.043: INFO: At 2022-11-08 18:51:03 +0000 UTC - event for sample-crd-conversion-webhook-deployment-644bcc8d4c-ql9r5: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 18:52:13.043: INFO: At 2022-11-08 18:51:03 +0000 UTC - event for sample-crd-conversion-webhook-deployment-644bcc8d4c-ql9r5: {kubelet 172.17.0.1} Created: Created container sample-crd-conversion-webhook Nov 8 18:52:13.043: INFO: At 2022-11-08 18:51:03 +0000 UTC - event for sample-crd-conversion-webhook-deployment-644bcc8d4c-ql9r5: {kubelet 172.17.0.1} Failed: Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: %!w(<nil>): unknown Nov 8 18:52:13.043: INFO: At 2022-11-08 18:51:03 +0000 UTC - event for sample-crd-conversion-webhook-deployment-644bcc8d4c-ql9r5: {kubelet 172.17.0.1} Failed: Error: sandbox container "d7922a5095fe045463b91eb006b70390a088f6bf18c0b0647968c810aea3a3b4" is not running Nov 8 18:52:13.043: INFO: At 2022-11-08 18:51:04 +0000 UTC - event for sample-crd-conversion-webhook-deployment-644bcc8d4c-ql9r5: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:52:13.043: INFO: At 2022-11-08 18:51:07 +0000 UTC - event for sample-crd-conversion-webhook-deployment-644bcc8d4c-ql9r5: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container sample-crd-conversion-webhook in pod sample-crd-conversion-webhook-deployment-644bcc8d4c-ql9r5_crd-webhook-8232(555a9011-8c6f-43a7-889e-44b95b7661d8) Nov 8 18:52:13.043: INFO: At 2022-11-08 18:51:14 +0000 UTC - event for sample-crd-conversion-webhook-deployment-644bcc8d4c-ql9r5: {kubelet 172.17.0.1} Started: Started container sample-crd-conversion-webhook Nov 8 18:52:13.051: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 18:52:13.051: INFO: sample-crd-conversion-webhook-deployment-644bcc8d4c-ql9r5 172.17.0.1 Running 0s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:51:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:51:37 +0000 UTC ContainersNotReady containers with unready status: [sample-crd-conversion-webhook]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:51:37 +0000 UTC ContainersNotReady containers with unready status: [sample-crd-conversion-webhook]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:51:01 +0000 UTC }] Nov 8 18:52:13.052: INFO: Nov 8 18:52:13.062: INFO: Unable to fetch crd-webhook-8232/sample-crd-conversion-webhook-deployment-644bcc8d4c-ql9r5/sample-crd-conversion-webhook logs: the server could not find the requested resource (get pods sample-crd-conversion-webhook-deployment-644bcc8d4c-ql9r5) Nov 8 18:52:13.067: INFO: Logging node info for node 172.17.0.1 Nov 8 18:52:13.071: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 5244 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:49:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:49:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:49:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:49:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:49:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 18:52:13.071: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 18:52:13.075: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 18:52:13.081: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 18:52:13.081: INFO: Container coredns ready: false, restart count 13 Nov 8 18:52:13.123: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] tear down framework | framework.go:193 STEP: Destroying namespace "crd-webhook-8232" for this suite. 11/08/22 18:52:13.124
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\sschedule\smultiple\sjobs\sconcurrently\s\[Conformance\]$'
test/e2e/apps/cronjob.go:78 k8s.io/kubernetes/test/e2e/apps.glob..func2.1() test/e2e/apps/cronjob.go:78 +0x2dcfrom junit_01.xml
[BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 18:52:19.53 Nov 8 18:52:19.530: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename cronjob 11/08/22 18:52:19.531 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 18:52:19.547 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 18:52:19.552 [BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 [It] should schedule multiple jobs concurrently [Conformance] test/e2e/apps/cronjob.go:69 STEP: Creating a cronjob 11/08/22 18:52:19.556 STEP: Ensuring more than one job is running at a time 11/08/22 18:52:19.563 ------------------------------ Automatically polling progress: [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance] (Spec Runtime: 5m0.027s) test/e2e/apps/cronjob.go:69 In [It] (Node Runtime: 5m0.001s) test/e2e/apps/cronjob.go:69 At [By Step] Ensuring more than one job is running at a time (Step Runtime: 4m59.994s) test/e2e/apps/cronjob.go:76 Spec Goroutine goroutine 3698 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc004081bf0, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x10?, 0x2f7d7e5?, 0x30?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7ebe6a8, 0xc0001a8000}, 0x75be5b4?, 0xc004129d60?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x0?, 0xc0d2c738e19367f8?, 0x26bd69a2c4c?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/apps.waitForActiveJobs({0x7efa648?, 0xc003a9c820}, {0xc0047a03c0, 0xc}, {0xc0047a11c0, 0xa}, 0x2) test/e2e/apps/cronjob.go:593 | // Wait for at least given amount of active jobs. | func waitForActiveJobs(c clientset.Interface, ns, cronJobName string, active int) error { > return wait.Poll(framework.Poll, cronJobTimeout, func() (bool, error) { | curr, err := getCronJob(c, ns, cronJobName) | if err != nil { > k8s.io/kubernetes/test/e2e/apps.glob..func2.1() test/e2e/apps/cronjob.go:77 | | ginkgo.By("Ensuring more than one job is running at a time") > err = waitForActiveJobs(f.ClientSet, f.Namespace.Name, cronJob.Name, 2) | framework.ExpectNoError(err, "Failed to wait for active jobs in CronJob %s in namespace %s", cronJob.Name, f.Namespace.Name) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0047ba1b0, 0xc004781740}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:57:19.572: INFO: Unexpected error: Failed to wait for active jobs in CronJob concurrent in namespace cronjob-5429: <*errors.errorString | 0xc000285cb0>: { s: "timed out waiting for the condition", } Nov 8 18:57:19.573: FAIL: Failed to wait for active jobs in CronJob concurrent in namespace cronjob-5429: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func2.1() test/e2e/apps/cronjob.go:78 +0x2dc [AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 Nov 8 18:57:19.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 18:57:19.577 STEP: Collecting events from namespace "cronjob-5429". 11/08/22 18:57:19.578 STEP: Found 59 events. 11/08/22 18:57:19.582 Nov 8 18:57:19.582: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for concurrent-27798893-s6zj8: { } Scheduled: Successfully assigned cronjob-5429/concurrent-27798893-s6zj8 to 172.17.0.1 Nov 8 18:57:19.582: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for concurrent-27798894-xq7dz: { } Scheduled: Successfully assigned cronjob-5429/concurrent-27798894-xq7dz to 172.17.0.1 Nov 8 18:57:19.582: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for concurrent-27798895-4h4gn: { } Scheduled: Successfully assigned cronjob-5429/concurrent-27798895-4h4gn to 172.17.0.1 Nov 8 18:57:19.583: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for concurrent-27798896-rssjf: { } Scheduled: Successfully assigned cronjob-5429/concurrent-27798896-rssjf to 172.17.0.1 Nov 8 18:57:19.583: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for concurrent-27798897-jjsbx: { } Scheduled: Successfully assigned cronjob-5429/concurrent-27798897-jjsbx to 172.17.0.1 Nov 8 18:57:19.583: INFO: At 2022-11-08 18:53:00 +0000 UTC - event for concurrent: {cronjob-controller } SuccessfulCreate: Created job concurrent-27798893 Nov 8 18:57:19.583: INFO: At 2022-11-08 18:53:00 +0000 UTC - event for concurrent-27798893: {job-controller } SuccessfulCreate: Created pod: concurrent-27798893-s6zj8 Nov 8 18:57:19.583: INFO: At 2022-11-08 18:53:02 +0000 UTC - event for concurrent-27798893-s6zj8: {kubelet 172.17.0.1} Created: Created container c Nov 8 18:57:19.583: INFO: At 2022-11-08 18:53:02 +0000 UTC - event for concurrent-27798893-s6zj8: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine Nov 8 18:57:19.583: INFO: At 2022-11-08 18:53:02 +0000 UTC - event for concurrent-27798893-s6zj8: {kubelet 172.17.0.1} Started: Started container c Nov 8 18:57:19.583: INFO: At 2022-11-08 18:53:03 +0000 UTC - event for concurrent-27798893-s6zj8: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:57:19.583: INFO: At 2022-11-08 18:53:07 +0000 UTC - event for concurrent: {cronjob-controller } SawCompletedJob: Saw completed job: concurrent-27798893, status: Failed Nov 8 18:57:19.583: INFO: At 2022-11-08 18:53:07 +0000 UTC - event for concurrent-27798893: {job-controller } BackoffLimitExceeded: Job has reached the specified backoff limit Nov 8 18:57:19.583: INFO: At 2022-11-08 18:53:07 +0000 UTC - event for concurrent-27798893: {job-controller } SuccessfulDelete: Deleted pod: concurrent-27798893-s6zj8 Nov 8 18:57:19.583: INFO: At 2022-11-08 18:53:10 +0000 UTC - event for concurrent-27798893-s6zj8: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container c in pod concurrent-27798893-s6zj8_cronjob-5429(4fb6634d-9277-4bc4-ac86-f6a882b08f1d) Nov 8 18:57:19.583: INFO: At 2022-11-08 18:54:00 +0000 UTC - event for concurrent: {cronjob-controller } SuccessfulCreate: Created job concurrent-27798894 Nov 8 18:57:19.583: INFO: At 2022-11-08 18:54:00 +0000 UTC - event for concurrent-27798894: {job-controller } SuccessfulCreate: Created pod: concurrent-27798894-xq7dz Nov 8 18:57:19.583: INFO: At 2022-11-08 18:54:02 +0000 UTC - event for concurrent-27798894-xq7dz: {kubelet 172.17.0.1} Started: Started container c Nov 8 18:57:19.583: INFO: At 2022-11-08 18:54:02 +0000 UTC - event for concurrent-27798894-xq7dz: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine Nov 8 18:57:19.583: INFO: At 2022-11-08 18:54:02 +0000 UTC - event for concurrent-27798894-xq7dz: {kubelet 172.17.0.1} Created: Created container c Nov 8 18:57:19.583: INFO: At 2022-11-08 18:54:04 +0000 UTC - event for concurrent-27798894-xq7dz: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:57:19.583: INFO: At 2022-11-08 18:54:07 +0000 UTC - event for concurrent: {cronjob-controller } SuccessfulDelete: Deleted job concurrent-27798893 Nov 8 18:57:19.583: INFO: At 2022-11-08 18:54:07 +0000 UTC - event for concurrent: {cronjob-controller } SawCompletedJob: Saw completed job: concurrent-27798894, status: Failed Nov 8 18:57:19.583: INFO: At 2022-11-08 18:54:07 +0000 UTC - event for concurrent-27798894: {job-controller } SuccessfulDelete: Deleted pod: concurrent-27798894-xq7dz Nov 8 18:57:19.583: INFO: At 2022-11-08 18:54:07 +0000 UTC - event for concurrent-27798894: {job-controller } BackoffLimitExceeded: Job has reached the specified backoff limit Nov 8 18:57:19.583: INFO: At 2022-11-08 18:54:10 +0000 UTC - event for concurrent-27798894-xq7dz: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container c in pod concurrent-27798894-xq7dz_cronjob-5429(2fbf91ca-4cdd-4f47-baf7-76c624d32260) Nov 8 18:57:19.583: INFO: At 2022-11-08 18:55:00 +0000 UTC - event for concurrent: {cronjob-controller } SuccessfulCreate: Created job concurrent-27798895 Nov 8 18:57:19.583: INFO: At 2022-11-08 18:55:00 +0000 UTC - event for concurrent-27798895: {job-controller } SuccessfulCreate: Created pod: concurrent-27798895-4h4gn Nov 8 18:57:19.583: INFO: At 2022-11-08 18:55:02 +0000 UTC - event for concurrent-27798895-4h4gn: {kubelet 172.17.0.1} Started: Started container c Nov 8 18:57:19.583: INFO: At 2022-11-08 18:55:02 +0000 UTC - event for concurrent-27798895-4h4gn: {kubelet 172.17.0.1} Created: Created container c Nov 8 18:57:19.583: INFO: At 2022-11-08 18:55:02 +0000 UTC - event for concurrent-27798895-4h4gn: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine Nov 8 18:57:19.583: INFO: At 2022-11-08 18:55:04 +0000 UTC - event for concurrent-27798895-4h4gn: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:57:19.583: INFO: At 2022-11-08 18:55:08 +0000 UTC - event for concurrent: {cronjob-controller } SawCompletedJob: Saw completed job: concurrent-27798895, status: Failed Nov 8 18:57:19.583: INFO: At 2022-11-08 18:55:08 +0000 UTC - event for concurrent: {cronjob-controller } SuccessfulDelete: Deleted job concurrent-27798894 Nov 8 18:57:19.583: INFO: At 2022-11-08 18:55:08 +0000 UTC - event for concurrent-27798895: {job-controller } BackoffLimitExceeded: Job has reached the specified backoff limit Nov 8 18:57:19.583: INFO: At 2022-11-08 18:55:08 +0000 UTC - event for concurrent-27798895: {job-controller } SuccessfulDelete: Deleted pod: concurrent-27798895-4h4gn Nov 8 18:57:19.583: INFO: At 2022-11-08 18:55:10 +0000 UTC - event for concurrent-27798895-4h4gn: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container c in pod concurrent-27798895-4h4gn_cronjob-5429(18e4b2ed-56e4-446a-8927-b2a8861d7f06) Nov 8 18:57:19.583: INFO: At 2022-11-08 18:56:00 +0000 UTC - event for concurrent: {cronjob-controller } SuccessfulCreate: Created job concurrent-27798896 Nov 8 18:57:19.583: INFO: At 2022-11-08 18:56:00 +0000 UTC - event for concurrent-27798896: {job-controller } SuccessfulCreate: Created pod: concurrent-27798896-rssjf Nov 8 18:57:19.583: INFO: At 2022-11-08 18:56:02 +0000 UTC - event for concurrent-27798896-rssjf: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine Nov 8 18:57:19.583: INFO: At 2022-11-08 18:56:02 +0000 UTC - event for concurrent-27798896-rssjf: {kubelet 172.17.0.1} Created: Created container c Nov 8 18:57:19.583: INFO: At 2022-11-08 18:56:02 +0000 UTC - event for concurrent-27798896-rssjf: {kubelet 172.17.0.1} Started: Started container c Nov 8 18:57:19.583: INFO: At 2022-11-08 18:56:04 +0000 UTC - event for concurrent-27798896-rssjf: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:57:19.583: INFO: At 2022-11-08 18:56:08 +0000 UTC - event for concurrent: {cronjob-controller } SuccessfulDelete: Deleted job concurrent-27798895 Nov 8 18:57:19.583: INFO: At 2022-11-08 18:56:08 +0000 UTC - event for concurrent: {cronjob-controller } SawCompletedJob: Saw completed job: concurrent-27798896, status: Failed Nov 8 18:57:19.583: INFO: At 2022-11-08 18:56:08 +0000 UTC - event for concurrent-27798896: {job-controller } BackoffLimitExceeded: Job has reached the specified backoff limit Nov 8 18:57:19.583: INFO: At 2022-11-08 18:56:08 +0000 UTC - event for concurrent-27798896: {job-controller } SuccessfulDelete: Deleted pod: concurrent-27798896-rssjf Nov 8 18:57:19.583: INFO: At 2022-11-08 18:56:10 +0000 UTC - event for concurrent-27798896-rssjf: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container c in pod concurrent-27798896-rssjf_cronjob-5429(9667809f-6cbb-4a4a-bf8b-8a68fc21273f) Nov 8 18:57:19.583: INFO: At 2022-11-08 18:57:00 +0000 UTC - event for concurrent: {cronjob-controller } SuccessfulCreate: Created job concurrent-27798897 Nov 8 18:57:19.583: INFO: At 2022-11-08 18:57:00 +0000 UTC - event for concurrent-27798897: {job-controller } SuccessfulCreate: Created pod: concurrent-27798897-jjsbx Nov 8 18:57:19.583: INFO: At 2022-11-08 18:57:02 +0000 UTC - event for concurrent-27798897-jjsbx: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine Nov 8 18:57:19.583: INFO: At 2022-11-08 18:57:02 +0000 UTC - event for concurrent-27798897-jjsbx: {kubelet 172.17.0.1} Started: Started container c Nov 8 18:57:19.583: INFO: At 2022-11-08 18:57:02 +0000 UTC - event for concurrent-27798897-jjsbx: {kubelet 172.17.0.1} Created: Created container c Nov 8 18:57:19.583: INFO: At 2022-11-08 18:57:04 +0000 UTC - event for concurrent-27798897-jjsbx: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:57:19.583: INFO: At 2022-11-08 18:57:08 +0000 UTC - event for concurrent: {cronjob-controller } SuccessfulDelete: Deleted job concurrent-27798896 Nov 8 18:57:19.583: INFO: At 2022-11-08 18:57:08 +0000 UTC - event for concurrent: {cronjob-controller } SawCompletedJob: Saw completed job: concurrent-27798897, status: Failed Nov 8 18:57:19.583: INFO: At 2022-11-08 18:57:08 +0000 UTC - event for concurrent-27798897: {job-controller } BackoffLimitExceeded: Job has reached the specified backoff limit Nov 8 18:57:19.583: INFO: At 2022-11-08 18:57:08 +0000 UTC - event for concurrent-27798897: {job-controller } SuccessfulDelete: Deleted pod: concurrent-27798897-jjsbx Nov 8 18:57:19.583: INFO: At 2022-11-08 18:57:10 +0000 UTC - event for concurrent-27798897-jjsbx: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container c in pod concurrent-27798897-jjsbx_cronjob-5429(46453de6-ac9c-49b4-98bf-47f19d252104) Nov 8 18:57:19.585: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 18:57:19.585: INFO: Nov 8 18:57:19.589: INFO: Logging node info for node 172.17.0.1 Nov 8 18:57:19.593: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 5979 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:54:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 18:57:19.593: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 18:57:19.598: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 18:57:19.616: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 18:57:19.616: INFO: Container coredns ready: false, restart count 14 Nov 8 18:57:19.653: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 STEP: Destroying namespace "cronjob-5429" for this suite. 11/08/22 18:57:19.653
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sReplicaSet\sshould\sserve\sa\sbasic\simage\son\seach\sreplica\swith\sa\spublic\simage\s\s\[Conformance\]$'
test/e2e/apps/replica_set.go:233 k8s.io/kubernetes/test/e2e/apps.testReplicaSetServeImageOrFail(0xc000b880f0, {0x74ae937, 0x5}, {0xc0001b5cb0, 0x2c}) test/e2e/apps/replica_set.go:233 +0x8b5 k8s.io/kubernetes/test/e2e/apps.glob..func9.1() test/e2e/apps/replica_set.go:112 +0x37from junit_01.xml
[BeforeEach] [sig-apps] ReplicaSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 18:44:20.794 Nov 8 18:44:20.794: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename replicaset 11/08/22 18:44:20.795 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 18:44:20.813 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 18:44:20.818 [BeforeEach] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:31 [It] should serve a basic image on each replica with a public image [Conformance] test/e2e/apps/replica_set.go:111 Nov 8 18:44:20.822: INFO: Creating ReplicaSet my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2 Nov 8 18:44:20.834: INFO: Pod name my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Found 0 pods out of 1 Nov 8 18:44:25.840: INFO: Pod name my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Found 1 pods out of 1 Nov 8 18:44:25.840: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2" is running Nov 8 18:44:25.840: INFO: Waiting up to 5m0s for pod "my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v" in namespace "replicaset-4545" to be "running" Nov 8 18:44:25.849: INFO: Pod "my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v": Phase="Running", Reason="", readiness=false. Elapsed: 8.113343ms Nov 8 18:44:25.849: INFO: Pod "my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v" satisfied condition "running" Nov 8 18:44:25.849: INFO: Pod "my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:44:20 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:44:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:44:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:44:20 +0000 UTC Reason: Message:}]) Nov 8 18:44:25.849: INFO: Trying to dial the pod Nov 8 18:44:30.861: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:44:35.866: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:45:10.861: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:45:18.913: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:45:23.938: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:45:25.864: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:45:33.921: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:45:38.914: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:45:43.938: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:45:45.858: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:45:53.922: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:45:58.914: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:46:03.937: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:46:08.930: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:46:13.922: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:46:18.918: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:46:23.942: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:46:25.861: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:46:25.873: INFO: Controller my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: Failed to GET from replica 1 [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v]: the server is currently unable to handle the request (get pods my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 18, 44, 20, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dc8150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://483d57b36f1cf4d5e1a62c9857004f86b58d0ea5d903bde85a885faf163852c9", Started:(*bool)(0xc002d5824f)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 18:46:25.873: FAIL: Did not get expected responses within the timeout period of 120.00 seconds. Full Stack Trace k8s.io/kubernetes/test/e2e/apps.testReplicaSetServeImageOrFail(0xc000b880f0, {0x74ae937, 0x5}, {0xc0001b5cb0, 0x2c}) test/e2e/apps/replica_set.go:233 +0x8b5 k8s.io/kubernetes/test/e2e/apps.glob..func9.1() test/e2e/apps/replica_set.go:112 +0x37 [AfterEach] [sig-apps] ReplicaSet test/e2e/framework/node/init/init.go:32 Nov 8 18:46:25.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] ReplicaSet test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] ReplicaSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 18:46:25.877 STEP: Collecting events from namespace "replicaset-4545". 11/08/22 18:46:25.877 STEP: Found 8 events. 11/08/22 18:46:25.88 Nov 8 18:46:25.880: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v: { } Scheduled: Successfully assigned replicaset-4545/my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v to 172.17.0.1 Nov 8 18:46:25.880: INFO: At 2022-11-08 18:44:20 +0000 UTC - event for my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2: {replicaset-controller } SuccessfulCreate: Created pod: my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v Nov 8 18:46:25.880: INFO: At 2022-11-08 18:44:23 +0000 UTC - event for my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 18:46:25.880: INFO: At 2022-11-08 18:44:23 +0000 UTC - event for my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v: {kubelet 172.17.0.1} Created: Created container my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2 Nov 8 18:46:25.880: INFO: At 2022-11-08 18:44:23 +0000 UTC - event for my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v: {kubelet 172.17.0.1} Started: Started container my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2 Nov 8 18:46:25.880: INFO: At 2022-11-08 18:44:23 +0000 UTC - event for my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:46:25.880: INFO: At 2022-11-08 18:44:30 +0000 UTC - event for my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2 in pod my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v_replicaset-4545(f40a830d-0adb-45e7-b02c-0674f2115e9f) Nov 8 18:46:25.880: INFO: At 2022-11-08 18:44:37 +0000 UTC - event for my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v: {kubelet 172.17.0.1} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: %!w(<nil>): unknown Nov 8 18:46:25.884: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 18:46:25.884: INFO: my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:46:02 +0000 UTC ContainersNotReady containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:46:02 +0000 UTC ContainersNotReady containers with unready status: [my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:44:20 +0000 UTC }] Nov 8 18:46:25.884: INFO: Nov 8 18:46:25.907: INFO: Logging node info for node 172.17.0.1 Nov 8 18:46:25.911: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 4852 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:44:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:44:38 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:44:38 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:44:38 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:44:38 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 18:46:25.911: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 18:46:25.917: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 18:46:25.924: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 18:46:25.924: INFO: Container coredns ready: false, restart count 12 Nov 8 18:46:25.924: INFO: my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2-9ks8v started at 2022-11-08 18:44:20 +0000 UTC (0+1 container statuses recorded) Nov 8 18:46:25.924: INFO: Container my-hostname-basic-fda19a45-91cc-472f-9d2c-3c563f6ab6d2 ready: false, restart count 4 Nov 8 18:46:25.959: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-apps] ReplicaSet tear down framework | framework.go:193 STEP: Destroying namespace "replicaset-4545" for this suite. 11/08/22 18:46:25.96
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sBurst\sscaling\sshould\srun\sto\scompletion\seven\swith\sunhealthy\spods\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/statefulset/wait.go:58 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:58 +0xf9 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 +0x27bfrom junit_01.xml
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 18:19:02.058 Nov 8 18:19:02.058: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/08/22 18:19:02.06 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 18:19:02.08 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 18:19:02.085 [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:98 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:113 STEP: Creating service test in namespace statefulset-8979 11/08/22 18:19:02.09 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] test/e2e/apps/statefulset.go:697 STEP: Creating stateful set ss in namespace statefulset-8979 11/08/22 18:19:02.099 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 11/08/22 18:19:02.109 Nov 8 18:19:02.114: INFO: Found 0 stateful pods, waiting for 1 Nov 8 18:19:12.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 8 18:19:22.122: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:19:32.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:19:42.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:19:52.118: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:20:02.118: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:20:12.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:20:22.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:20:32.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:20:42.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:20:52.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:21:02.121: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:21:12.120: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:21:22.118: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:21:32.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:21:42.121: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:21:52.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:22:02.120: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:22:12.121: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:22:22.118: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:22:32.118: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:22:42.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:22:52.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:23:02.118: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:23:12.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:23:22.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:23:32.118: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:23:42.121: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:23:52.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 5m0.041s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 5m0s) test/e2e/apps/statefulset.go:697 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 (Step Runtime: 4m59.99s) test/e2e/apps/statefulset.go:707 Spec Goroutine goroutine 1753 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c0408, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xf8?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc003bb3e48?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2c3?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will not halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bce, 0xc00440a780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:24:02.118: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:24:12.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 5m20.043s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 5m20.002s) test/e2e/apps/statefulset.go:697 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 (Step Runtime: 5m19.992s) test/e2e/apps/statefulset.go:707 Spec Goroutine goroutine 1753 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c0408, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xf8?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc003bb3e48?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2c3?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will not halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bce, 0xc00440a780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:24:22.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:24:32.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 5m40.044s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 5m40.003s) test/e2e/apps/statefulset.go:697 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 (Step Runtime: 5m39.993s) test/e2e/apps/statefulset.go:707 Spec Goroutine goroutine 1753 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c0408, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xf8?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc003bb3e48?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2c3?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will not halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bce, 0xc00440a780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:24:42.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:24:52.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 6m0.046s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 6m0.005s) test/e2e/apps/statefulset.go:697 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 (Step Runtime: 5m59.994s) test/e2e/apps/statefulset.go:707 Spec Goroutine goroutine 1753 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c0408, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xf8?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc003bb3e48?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2c3?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will not halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bce, 0xc00440a780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:25:02.118: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:25:12.120: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 6m20.048s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 6m20.006s) test/e2e/apps/statefulset.go:697 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 (Step Runtime: 6m19.996s) test/e2e/apps/statefulset.go:707 Spec Goroutine goroutine 1753 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c0408, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xf8?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc003bb3e48?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2c3?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will not halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bce, 0xc00440a780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:25:22.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:25:32.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 6m40.05s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 6m40.009s) test/e2e/apps/statefulset.go:697 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 (Step Runtime: 6m39.998s) test/e2e/apps/statefulset.go:707 Spec Goroutine goroutine 1753 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c0408, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xf8?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc003bb3e48?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2c3?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will not halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bce, 0xc00440a780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:25:42.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:25:52.118: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 7m0.051s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 7m0.01s) test/e2e/apps/statefulset.go:697 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 (Step Runtime: 7m0s) test/e2e/apps/statefulset.go:707 Spec Goroutine goroutine 1753 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c0408, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xf8?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc003bb3e48?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2c3?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will not halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bce, 0xc00440a780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:26:02.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:26:12.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 7m20.053s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 7m20.012s) test/e2e/apps/statefulset.go:697 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 (Step Runtime: 7m20.002s) test/e2e/apps/statefulset.go:707 Spec Goroutine goroutine 1753 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c0408, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xf8?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc003bb3e48?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2c3?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will not halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bce, 0xc00440a780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:26:22.120: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:26:32.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 7m40.055s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 7m40.014s) test/e2e/apps/statefulset.go:697 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 (Step Runtime: 7m40.004s) test/e2e/apps/statefulset.go:707 Spec Goroutine goroutine 1753 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c0408, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xf8?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc003bb3e48?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2c3?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will not halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bce, 0xc00440a780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:26:42.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:26:52.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 8m0.057s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 8m0.016s) test/e2e/apps/statefulset.go:697 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 (Step Runtime: 8m0.006s) test/e2e/apps/statefulset.go:707 Spec Goroutine goroutine 1753 [runnable] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c0408, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xf8?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc003bb3e48?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2c3?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will not halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bce, 0xc00440a780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:27:02.123: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:27:12.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 8m20.06s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 8m20.019s) test/e2e/apps/statefulset.go:697 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 (Step Runtime: 8m20.008s) test/e2e/apps/statefulset.go:707 Spec Goroutine goroutine 1753 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000af0d80, 0xc0001d1800) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003140800, 0xc0001d1800, {0xa0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00168f040?}, 0xc0001d1800?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc00168f040, 0xc0001d1800) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6ee5440?, 0xc003185470?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc004840d40, 0xc0001d1100) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc0001d1100, {0x7e8b940, 0xc004840d40}, {0x73cd720?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc00483e1e0, 0xc0001d1100, {0x7f963eb825b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc00483e1e0, 0xc0001d1100) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc0001d0600, {0x7ebe6a8, 0xc0001a8008}, 0x0?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc0001d0600, {0x7ebe6a8, 0xc0001a8008}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*pods).List(0xc000f59e40, {0x7ebe6a8, 0xc0001a8008}, {{{0x0, 0x0}, {0x0, 0x0}}, {0xc0043a1b50, 0x10}, {0x0, ...}, ...}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:99 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7efa648, 0xc0038c2d00}, 0xc004161400) test/e2e/framework/statefulset/rest.go:68 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26f2811, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7ebe6a8?, 0xc0001a8000?}, 0x25da61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c0408, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xf8?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc003bb3e48?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2c3?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will not halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bce, 0xc00440a780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:27:22.121: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:27:32.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:27:42.118: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 8m40.062s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 8m40.021s) test/e2e/apps/statefulset.go:697 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 (Step Runtime: 8m40.01s) test/e2e/apps/statefulset.go:707 Spec Goroutine goroutine 1753 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c0408, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xf8?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc003bb3e48?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2c3?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will not halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bce, 0xc00440a780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:27:52.118: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:28:02.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 9m0.064s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 9m0.023s) test/e2e/apps/statefulset.go:697 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 (Step Runtime: 9m0.012s) test/e2e/apps/statefulset.go:707 Spec Goroutine goroutine 1753 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c0408, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xf8?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc003bb3e48?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2c3?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will not halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bce, 0xc00440a780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:28:12.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:28:22.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 9m20.065s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 9m20.024s) test/e2e/apps/statefulset.go:697 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 (Step Runtime: 9m20.014s) test/e2e/apps/statefulset.go:707 Spec Goroutine goroutine 1753 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c0408, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xf8?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc003bb3e48?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2c3?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will not halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bce, 0xc00440a780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:28:32.118: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:28:42.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] (Spec Runtime: 9m40.067s) test/e2e/apps/statefulset.go:697 In [It] (Node Runtime: 9m40.026s) test/e2e/apps/statefulset.go:697 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-8979 (Step Runtime: 9m40.016s) test/e2e/apps/statefulset.go:707 Spec Goroutine goroutine 1753 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c0408, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xf8?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc003bb3e48?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2c3?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will not halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bce, 0xc00440a780}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:28:52.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:29:02.120: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:29:02.125: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 18:29:02.125: FAIL: Failed waiting for pods to enter running: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc0038c2d00}, 0x1, 0x1, 0xc004161400) test/e2e/framework/statefulset/wait.go:58 +0xf9 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.11() test/e2e/apps/statefulset.go:708 +0x27b [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:124 Nov 8 18:29:02.130: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8979 describe po ss-0' Nov 8 18:29:02.254: INFO: stderr: "" Nov 8 18:29:02.254: INFO: stdout: "Name: ss-0\nNamespace: statefulset-8979\nPriority: 0\nService Account: default\nNode: 172.17.0.1/172.17.0.1\nStart Time: Tue, 08 Nov 2022 18:19:02 +0000\nLabels: baz=blah\n controller-revision-hash=ss-6557876d87\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: <none>\nStatus: Running\nIP: 10.88.2.251\nIPs:\n IP: 10.88.2.251\n IP: 2001:4860:4860::2fb\nControlled By: StatefulSet/ss\nContainers:\n webserver:\n Container ID: containerd://e55e3afa7e9490812739de2e976ae7f9dd53d7591c7dac37365827d41aa0ad78\n Image: registry.k8s.io/e2e-test-images/httpd:2.4.38-2\n Image ID: registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\n Port: <none>\n Host Port: <none>\n State: Waiting\n Reason: CrashLoopBackOff\n Last State: Terminated\n Reason: Error\n Exit Code: 137\n Started: Tue, 08 Nov 2022 18:24:45 +0000\n Finished: Tue, 08 Nov 2022 18:24:45 +0000\n Ready: False\n Restart Count: 6\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9tn4w (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-9tn4w:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10m default-scheduler Successfully assigned statefulset-8979/ss-0 to 172.17.0.1\n Normal Pulling 9m58s kubelet Pulling image \"registry.k8s.io/e2e-test-images/httpd:2.4.38-2\"\n Normal Pulled 9m53s kubelet Successfully pulled image \"registry.k8s.io/e2e-test-images/httpd:2.4.38-2\" in 4.695293503s (4.695307366s including waiting)\n Warning Failed 9m53s kubelet Error: failed to get sandbox container task: no running task found: task e877f34d864685994c25b210face7df2a0393facc157acd338fa395759e0a846 not found: not found\n Normal Pulled 9m34s (x3 over 9m50s) kubelet Container image \"registry.k8s.io/e2e-test-images/httpd:2.4.38-2\" already present on machine\n Normal Created 9m34s (x3 over 9m50s) kubelet Created container webserver\n Normal Started 9m34s (x3 over 9m50s) kubelet Started container webserver\n Normal SandboxChanged 9m32s (x6 over 9m52s) kubelet Pod sandbox changed, it will be killed and re-created.\n Warning BackOff 4m3s (x151 over 9m42s) kubelet Back-off restarting failed container webserver in pod ss-0_statefulset-8979(638d5cb8-d8fb-4bb6-9d6c-7732dc5b0ffe)\n" Nov 8 18:29:02.254: INFO: Output of kubectl describe ss-0: Name: ss-0 Namespace: statefulset-8979 Priority: 0 Service Account: default Node: 172.17.0.1/172.17.0.1 Start Time: Tue, 08 Nov 2022 18:19:02 +0000 Labels: baz=blah controller-revision-hash=ss-6557876d87 foo=bar statefulset.kubernetes.io/pod-name=ss-0 Annotations: <none> Status: Running IP: 10.88.2.251 IPs: IP: 10.88.2.251 IP: 2001:4860:4860::2fb Controlled By: StatefulSet/ss Containers: webserver: Container ID: containerd://e55e3afa7e9490812739de2e976ae7f9dd53d7591c7dac37365827d41aa0ad78 Image: registry.k8s.io/e2e-test-images/httpd:2.4.38-2 Image ID: registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 Port: <none> Host Port: <none> State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 137 Started: Tue, 08 Nov 2022 18:24:45 +0000 Finished: Tue, 08 Nov 2022 18:24:45 +0000 Ready: False Restart Count: 6 Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9tn4w (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-9tn4w: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 10m default-scheduler Successfully assigned statefulset-8979/ss-0 to 172.17.0.1 Normal Pulling 9m58s kubelet Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-2" Normal Pulled 9m53s kubelet Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-2" in 4.695293503s (4.695307366s including waiting) Warning Failed 9m53s kubelet Error: failed to get sandbox container task: no running task found: task e877f34d864685994c25b210face7df2a0393facc157acd338fa395759e0a846 not found: not found Normal Pulled 9m34s (x3 over 9m50s) kubelet Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-2" already present on machine Normal Created 9m34s (x3 over 9m50s) kubelet Created container webserver Normal Started 9m34s (x3 over 9m50s) kubelet Started container webserver Normal SandboxChanged 9m32s (x6 over 9m52s) kubelet Pod sandbox changed, it will be killed and re-created. Warning BackOff 4m3s (x151 over 9m42s) kubelet Back-off restarting failed container webserver in pod ss-0_statefulset-8979(638d5cb8-d8fb-4bb6-9d6c-7732dc5b0ffe) Nov 8 18:29:02.255: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=statefulset-8979 logs ss-0 --tail=100' Nov 8 18:29:02.386: INFO: stderr: "" Nov 8 18:29:02.386: INFO: stdout: "[Tue Nov 08 18:24:45.061723 2022] [mpm_event:notice] [pid 1:tid 140332922055528] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue Nov 08 18:24:45.061990 2022] [core:notice] [pid 1:tid 140332922055528] AH00094: Command line: 'httpd -D FOREGROUND'\n" Nov 8 18:29:02.386: INFO: Last 100 log lines of ss-0: [Tue Nov 08 18:24:45.061723 2022] [mpm_event:notice] [pid 1:tid 140332922055528] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Tue Nov 08 18:24:45.061990 2022] [core:notice] [pid 1:tid 140332922055528] AH00094: Command line: 'httpd -D FOREGROUND' Nov 8 18:29:02.386: INFO: Deleting all statefulset in ns statefulset-8979 Nov 8 18:29:02.391: INFO: Scaling statefulset ss to 0 Nov 8 18:29:12.417: INFO: Waiting for statefulset status.replicas updated to 0 Nov 8 18:29:12.421: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 Nov 8 18:29:12.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 18:29:12.443 STEP: Collecting events from namespace "statefulset-8979". 11/08/22 18:29:12.444 STEP: Found 11 events. 11/08/22 18:29:12.448 Nov 8 18:29:12.448: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss-0: { } Scheduled: Successfully assigned statefulset-8979/ss-0 to 172.17.0.1 Nov 8 18:29:12.448: INFO: At 2022-11-08 18:19:02 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful Nov 8 18:29:12.448: INFO: At 2022-11-08 18:19:04 +0000 UTC - event for ss-0: {kubelet 172.17.0.1} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-2" Nov 8 18:29:12.448: INFO: At 2022-11-08 18:19:09 +0000 UTC - event for ss-0: {kubelet 172.17.0.1} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-2" in 4.695293503s (4.695307366s including waiting) Nov 8 18:29:12.448: INFO: At 2022-11-08 18:19:09 +0000 UTC - event for ss-0: {kubelet 172.17.0.1} Failed: Error: failed to get sandbox container task: no running task found: task e877f34d864685994c25b210face7df2a0393facc157acd338fa395759e0a846 not found: not found Nov 8 18:29:12.448: INFO: At 2022-11-08 18:19:10 +0000 UTC - event for ss-0: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:29:12.448: INFO: At 2022-11-08 18:19:12 +0000 UTC - event for ss-0: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-2" already present on machine Nov 8 18:29:12.448: INFO: At 2022-11-08 18:19:12 +0000 UTC - event for ss-0: {kubelet 172.17.0.1} Created: Created container webserver Nov 8 18:29:12.448: INFO: At 2022-11-08 18:19:12 +0000 UTC - event for ss-0: {kubelet 172.17.0.1} Started: Started container webserver Nov 8 18:29:12.448: INFO: At 2022-11-08 18:19:20 +0000 UTC - event for ss-0: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container webserver in pod ss-0_statefulset-8979(638d5cb8-d8fb-4bb6-9d6c-7732dc5b0ffe) Nov 8 18:29:12.448: INFO: At 2022-11-08 18:29:02 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful Nov 8 18:29:12.459: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 18:29:12.459: INFO: Nov 8 18:29:12.463: INFO: Logging node info for node 172.17.0.1 Nov 8 18:29:12.466: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 2758 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:24:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:24:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:24:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:24:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:24:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 18:29:12.466: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 18:29:12.470: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 18:29:12.479: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 18:29:12.479: INFO: Container coredns ready: false, restart count 9 Nov 8 18:29:12.521: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 STEP: Destroying namespace "statefulset-8979" for this suite. 11/08/22 18:29:12.521
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/statefulset/wait.go:58 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:58 +0xf9 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 +0x57bfrom junit_01.xml
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 19:17:34.072 Nov 8 19:17:34.072: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/08/22 19:17:34.073 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 19:17:34.092 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 19:17:34.096 [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:98 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:113 STEP: Creating service test in namespace statefulset-1165 11/08/22 19:17:34.101 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] test/e2e/apps/statefulset.go:587 STEP: Initializing watcher for selector baz=blah,foo=bar 11/08/22 19:17:34.108 STEP: Creating stateful set ss in namespace statefulset-1165 11/08/22 19:17:34.113 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 11/08/22 19:17:34.122 Nov 8 19:17:34.126: INFO: Found 0 stateful pods, waiting for 1 Nov 8 19:17:44.132: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:17:54.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:18:04.132: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:18:14.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:18:24.134: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:18:34.132: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:18:44.132: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:18:54.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:19:04.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:19:14.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:19:24.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:19:34.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:19:44.132: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:19:54.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:20:04.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:20:14.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:20:24.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:20:34.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:20:44.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:20:54.129: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:21:04.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:21:14.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:21:24.132: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:21:34.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:21:44.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:21:54.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:22:04.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:22:14.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:22:24.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 5m0.036s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 5m0s) test/e2e/apps/statefulset.go:587 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 (Step Runtime: 4m59.986s) test/e2e/apps/statefulset.go:631 Spec Goroutine goroutine 7361 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc000563b30, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x90?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc002a6dde0?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7e96238, 0xc003bd2000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 7396 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7ebe6e0, 0xc002ccf500}, {0x7e9aec8, 0xc002a793c0}, {0xc0016c6f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7ebe6e0, 0xc002ccf500}, {0xc003fdf328?, 0x74aacf8?}, {0x7e8b720?, 0xc001180f48?}, {0xc0016c6f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 | defer cancel() | > _, orderErr = watchtools.Until(ctx, pl.ResourceVersion, w, func(event watch.Event) (bool, error) { | if event.Type != watch.Added { | return false, nil > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 | var orderErr error | wg.Add(1) > go func() { | defer ginkgo.GinkgoRecover() | defer wg.Done() ------------------------------ Nov 8 19:22:34.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:22:44.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 5m20.039s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 5m20.003s) test/e2e/apps/statefulset.go:587 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 (Step Runtime: 5m19.988s) test/e2e/apps/statefulset.go:631 Spec Goroutine goroutine 7361 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc000563b30, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x90?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc002a6dde0?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7e96238, 0xc003bd2000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 7396 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7ebe6e0, 0xc002ccf500}, {0x7e9aec8, 0xc002a793c0}, {0xc0016c6f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7ebe6e0, 0xc002ccf500}, {0xc003fdf328?, 0x74aacf8?}, {0x7e8b720?, 0xc001180f48?}, {0xc0016c6f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 | defer cancel() | > _, orderErr = watchtools.Until(ctx, pl.ResourceVersion, w, func(event watch.Event) (bool, error) { | if event.Type != watch.Added { | return false, nil > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 | var orderErr error | wg.Add(1) > go func() { | defer ginkgo.GinkgoRecover() | defer wg.Done() ------------------------------ Nov 8 19:22:54.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:23:04.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 5m40.041s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 5m40.005s) test/e2e/apps/statefulset.go:587 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 (Step Runtime: 5m39.991s) test/e2e/apps/statefulset.go:631 Spec Goroutine goroutine 7361 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc000563b30, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x90?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc002a6dde0?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7e96238, 0xc003bd2000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 7396 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7ebe6e0, 0xc002ccf500}, {0x7e9aec8, 0xc002a793c0}, {0xc0016c6f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7ebe6e0, 0xc002ccf500}, {0xc003fdf328?, 0x74aacf8?}, {0x7e8b720?, 0xc001180f48?}, {0xc0016c6f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 | defer cancel() | > _, orderErr = watchtools.Until(ctx, pl.ResourceVersion, w, func(event watch.Event) (bool, error) { | if event.Type != watch.Added { | return false, nil > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 | var orderErr error | wg.Add(1) > go func() { | defer ginkgo.GinkgoRecover() | defer wg.Done() ------------------------------ Nov 8 19:23:14.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:23:24.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 6m0.044s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 6m0.008s) test/e2e/apps/statefulset.go:587 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 (Step Runtime: 5m59.994s) test/e2e/apps/statefulset.go:631 Spec Goroutine goroutine 7361 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc000563b30, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x90?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc002a6dde0?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7e96238, 0xc003bd2000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 7396 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7ebe6e0, 0xc002ccf500}, {0x7e9aec8, 0xc002a793c0}, {0xc0016c6f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7ebe6e0, 0xc002ccf500}, {0xc003fdf328?, 0x74aacf8?}, {0x7e8b720?, 0xc001180f48?}, {0xc0016c6f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 | defer cancel() | > _, orderErr = watchtools.Until(ctx, pl.ResourceVersion, w, func(event watch.Event) (bool, error) { | if event.Type != watch.Added { | return false, nil > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 | var orderErr error | wg.Add(1) > go func() { | defer ginkgo.GinkgoRecover() | defer wg.Done() ------------------------------ Nov 8 19:23:34.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:23:44.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 6m20.046s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 6m20.01s) test/e2e/apps/statefulset.go:587 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 (Step Runtime: 6m19.995s) test/e2e/apps/statefulset.go:631 Spec Goroutine goroutine 7361 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc000563b30, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x90?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc002a6dde0?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7e96238, 0xc003bd2000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 7396 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7ebe6e0, 0xc002ccf500}, {0x7e9aec8, 0xc002a793c0}, {0xc0016c6f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7ebe6e0, 0xc002ccf500}, {0xc003fdf328?, 0x74aacf8?}, {0x7e8b720?, 0xc001180f48?}, {0xc0016c6f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 | defer cancel() | > _, orderErr = watchtools.Until(ctx, pl.ResourceVersion, w, func(event watch.Event) (bool, error) { | if event.Type != watch.Added { | return false, nil > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 | var orderErr error | wg.Add(1) > go func() { | defer ginkgo.GinkgoRecover() | defer wg.Done() ------------------------------ Nov 8 19:23:54.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:24:04.129: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 6m40.047s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 6m40.011s) test/e2e/apps/statefulset.go:587 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 (Step Runtime: 6m39.997s) test/e2e/apps/statefulset.go:631 Spec Goroutine goroutine 7361 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc000563b30, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x90?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc002a6dde0?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7e96238, 0xc003bd2000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 7396 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7ebe6e0, 0xc002ccf500}, {0x7e9aec8, 0xc002a793c0}, {0xc0016c6f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7ebe6e0, 0xc002ccf500}, {0xc003fdf328?, 0x74aacf8?}, {0x7e8b720?, 0xc001180f48?}, {0xc0016c6f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 | defer cancel() | > _, orderErr = watchtools.Until(ctx, pl.ResourceVersion, w, func(event watch.Event) (bool, error) { | if event.Type != watch.Added { | return false, nil > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 | var orderErr error | wg.Add(1) > go func() { | defer ginkgo.GinkgoRecover() | defer wg.Done() ------------------------------ Nov 8 19:24:14.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:24:24.132: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 7m0.049s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 7m0.013s) test/e2e/apps/statefulset.go:587 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 (Step Runtime: 6m59.998s) test/e2e/apps/statefulset.go:631 Spec Goroutine goroutine 7361 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc000563b30, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x90?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc002a6dde0?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7e96238, 0xc003bd2000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 7396 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7ebe6e0, 0xc002ccf500}, {0x7e9aec8, 0xc002a793c0}, {0xc0016c6f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7ebe6e0, 0xc002ccf500}, {0xc003fdf328?, 0x74aacf8?}, {0x7e8b720?, 0xc001180f48?}, {0xc0016c6f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 | defer cancel() | > _, orderErr = watchtools.Until(ctx, pl.ResourceVersion, w, func(event watch.Event) (bool, error) { | if event.Type != watch.Added { | return false, nil > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 | var orderErr error | wg.Add(1) > go func() { | defer ginkgo.GinkgoRecover() | defer wg.Done() ------------------------------ Nov 8 19:24:34.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:24:44.132: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 7m20.051s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 7m20.015s) test/e2e/apps/statefulset.go:587 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 (Step Runtime: 7m20s) test/e2e/apps/statefulset.go:631 Spec Goroutine goroutine 7361 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc000563b30, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x90?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc002a6dde0?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7e96238, 0xc003bd2000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 7396 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7ebe6e0, 0xc002ccf500}, {0x7e9aec8, 0xc002a793c0}, {0xc0016c6f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7ebe6e0, 0xc002ccf500}, {0xc003fdf328?, 0x74aacf8?}, {0x7e8b720?, 0xc001180f48?}, {0xc0016c6f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 | defer cancel() | > _, orderErr = watchtools.Until(ctx, pl.ResourceVersion, w, func(event watch.Event) (bool, error) { | if event.Type != watch.Added { | return false, nil > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 | var orderErr error | wg.Add(1) > go func() { | defer ginkgo.GinkgoRecover() | defer wg.Done() ------------------------------ Nov 8 19:24:54.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:25:04.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 7m40.053s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 7m40.017s) test/e2e/apps/statefulset.go:587 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 (Step Runtime: 7m40.003s) test/e2e/apps/statefulset.go:631 Spec Goroutine goroutine 7361 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc000563b30, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x90?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc002a6dde0?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7e96238, 0xc003bd2000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 7396 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7ebe6e0, 0xc002ccf500}, {0x7e9aec8, 0xc002a793c0}, {0xc0016c6f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7ebe6e0, 0xc002ccf500}, {0xc003fdf328?, 0x74aacf8?}, {0x7e8b720?, 0xc001180f48?}, {0xc0016c6f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 | defer cancel() | > _, orderErr = watchtools.Until(ctx, pl.ResourceVersion, w, func(event watch.Event) (bool, error) { | if event.Type != watch.Added { | return false, nil > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 | var orderErr error | wg.Add(1) > go func() { | defer ginkgo.GinkgoRecover() | defer wg.Done() ------------------------------ Nov 8 19:25:14.133: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:25:24.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 8m0.055s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 8m0.019s) test/e2e/apps/statefulset.go:587 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 (Step Runtime: 8m0.005s) test/e2e/apps/statefulset.go:631 Spec Goroutine goroutine 7361 [runnable] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc000563b30, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x90?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc002a6dde0?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7e96238, 0xc003bd2000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 7396 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7ebe6e0, 0xc002ccf500}, {0x7e9aec8, 0xc002a793c0}, {0xc0016c6f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7ebe6e0, 0xc002ccf500}, {0xc003fdf328?, 0x74aacf8?}, {0x7e8b720?, 0xc001180f48?}, {0xc0016c6f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 | defer cancel() | > _, orderErr = watchtools.Until(ctx, pl.ResourceVersion, w, func(event watch.Event) (bool, error) { | if event.Type != watch.Added { | return false, nil > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 | var orderErr error | wg.Add(1) > go func() { | defer ginkgo.GinkgoRecover() | defer wg.Done() ------------------------------ Nov 8 19:25:34.137: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:25:44.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 8m20.058s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 8m20.022s) test/e2e/apps/statefulset.go:587 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 (Step Runtime: 8m20.007s) test/e2e/apps/statefulset.go:631 Spec Goroutine goroutine 7361 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000af0d80, 0xc002715500) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003140800, 0xc002715500, {0xa0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00168f040?}, 0xc002715500?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc00168f040, 0xc002715500) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6ee5440?, 0xc0034c1980?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc003f2bf80, 0xc002715400) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc002715400, {0x7e8b940, 0xc003f2bf80}, {0x73cd720?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc0036dc6c0, 0xc002715400, {0x7f963eb82108?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc0036dc6c0, 0xc002715400) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc002715200, {0x7ebe6a8, 0xc0001a8008}, 0x0?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc002715200, {0x7ebe6a8, 0xc0001a8008}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*pods).List(0xc0016e7740, {0x7ebe6a8, 0xc0001a8008}, {{{0x0, 0x0}, {0x0, 0x0}}, {0xc003fdfa20, 0x10}, {0x0, ...}, ...}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:99 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7efa648, 0xc002de2ea0}, 0xc00394c000) test/e2e/framework/statefulset/rest.go:68 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26f2811, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7ebe6a8?, 0xc0001a8000?}, 0x25da61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc000563b30, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x90?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc002a6dde0?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7e96238, 0xc003bd2000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 7396 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7ebe6e0, 0xc002ccf500}, {0x7e9aec8, 0xc002a793c0}, {0xc0016c6f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7ebe6e0, 0xc002ccf500}, {0xc003fdf328?, 0x74aacf8?}, {0x7e8b720?, 0xc001180f48?}, {0xc0016c6f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 | defer cancel() | > _, orderErr = watchtools.Until(ctx, pl.ResourceVersion, w, func(event watch.Event) (bool, error) { | if event.Type != watch.Added { | return false, nil > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 | var orderErr error | wg.Add(1) > go func() { | defer ginkgo.GinkgoRecover() | defer wg.Done() ------------------------------ Nov 8 19:25:54.134: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:26:04.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:26:14.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 8m40.06s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 8m40.024s) test/e2e/apps/statefulset.go:587 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 (Step Runtime: 8m40.01s) test/e2e/apps/statefulset.go:631 Spec Goroutine goroutine 7361 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc000563b30, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x90?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc002a6dde0?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7e96238, 0xc003bd2000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 7396 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7ebe6e0, 0xc002ccf500}, {0x7e9aec8, 0xc002a793c0}, {0xc0016c6f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7ebe6e0, 0xc002ccf500}, {0xc003fdf328?, 0x74aacf8?}, {0x7e8b720?, 0xc001180f48?}, {0xc0016c6f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 | defer cancel() | > _, orderErr = watchtools.Until(ctx, pl.ResourceVersion, w, func(event watch.Event) (bool, error) { | if event.Type != watch.Added { | return false, nil > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 | var orderErr error | wg.Add(1) > go func() { | defer ginkgo.GinkgoRecover() | defer wg.Done() ------------------------------ Nov 8 19:26:24.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:26:34.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 9m0.063s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 9m0.027s) test/e2e/apps/statefulset.go:587 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 (Step Runtime: 9m0.012s) test/e2e/apps/statefulset.go:631 Spec Goroutine goroutine 7361 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc000563b30, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x90?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc002a6dde0?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7e96238, 0xc003bd2000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 7396 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7ebe6e0, 0xc002ccf500}, {0x7e9aec8, 0xc002a793c0}, {0xc0016c6f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7ebe6e0, 0xc002ccf500}, {0xc003fdf328?, 0x74aacf8?}, {0x7e8b720?, 0xc001180f48?}, {0xc0016c6f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 | defer cancel() | > _, orderErr = watchtools.Until(ctx, pl.ResourceVersion, w, func(event watch.Event) (bool, error) { | if event.Type != watch.Added { | return false, nil > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 | var orderErr error | wg.Add(1) > go func() { | defer ginkgo.GinkgoRecover() | defer wg.Done() ------------------------------ Nov 8 19:26:44.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:26:54.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 9m20.065s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 9m20.029s) test/e2e/apps/statefulset.go:587 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 (Step Runtime: 9m20.014s) test/e2e/apps/statefulset.go:631 Spec Goroutine goroutine 7361 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc000563b30, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x90?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc002a6dde0?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7e96238, 0xc003bd2000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 7396 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7ebe6e0, 0xc002ccf500}, {0x7e9aec8, 0xc002a793c0}, {0xc0016c6f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7ebe6e0, 0xc002ccf500}, {0xc003fdf328?, 0x74aacf8?}, {0x7e8b720?, 0xc001180f48?}, {0xc0016c6f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 | defer cancel() | > _, orderErr = watchtools.Until(ctx, pl.ResourceVersion, w, func(event watch.Event) (bool, error) { | if event.Type != watch.Added { | return false, nil > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 | var orderErr error | wg.Add(1) > go func() { | defer ginkgo.GinkgoRecover() | defer wg.Done() ------------------------------ Nov 8 19:27:04.130: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:27:14.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false ------------------------------ Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 9m40.067s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 9m40.031s) test/e2e/apps/statefulset.go:587 At [By Step] Waiting until all stateful set ss replicas will be running in namespace statefulset-1165 (Step Runtime: 9m40.017s) test/e2e/apps/statefulset.go:631 Spec Goroutine goroutine 7361 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc000563b30, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0x90?, 0x2f7d7e5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x64e09a0?, 0xc002a6dde0?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x277?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:35 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 | | ginkgo.By("Waiting until all stateful set " + ssName + " replicas will be running in namespace " + ns) > e2estatefulset.WaitForRunningAndReady(c, *ss.Spec.Replicas, ss) | | ginkgo.By("Confirming that stateful set scale up will halt with unhealthy stateful pod") k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7e96238, 0xc003bd2000}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 7396 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7ebe6e0, 0xc002ccf500}, {0x7e9aec8, 0xc002a793c0}, {0xc0016c6f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7ebe6e0, 0xc002ccf500}, {0xc003fdf328?, 0x74aacf8?}, {0x7e8b720?, 0xc001180f48?}, {0xc0016c6f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.2() test/e2e/apps/statefulset.go:613 | defer cancel() | > _, orderErr = watchtools.Until(ctx, pl.ResourceVersion, w, func(event watch.Event) (bool, error) { | if event.Type != watch.Added { | return false, nil > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:605 | var orderErr error | wg.Add(1) > go func() { | defer ginkgo.GinkgoRecover() | defer wg.Done() ------------------------------ Nov 8 19:27:24.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:27:34.131: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:27:34.135: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 8 19:27:34.135: FAIL: Failed waiting for pods to enter running: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7efa648?, 0xc002de2ea0}, 0x1, 0x1, 0xc00394c000) test/e2e/framework/statefulset/wait.go:58 +0xf9 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:632 +0x57b [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:124 Nov 8 19:27:34.139: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1165 describe po ss-0' Nov 8 19:27:34.264: INFO: stderr: "" Nov 8 19:27:34.264: INFO: stdout: "Name: ss-0\nNamespace: statefulset-1165\nPriority: 0\nService Account: default\nNode: 172.17.0.1/172.17.0.1\nStart Time: Tue, 08 Nov 2022 19:17:34 +0000\nLabels: baz=blah\n controller-revision-hash=ss-6557876d87\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: <none>\nStatus: Running\nIP: 10.88.9.60\nIPs:\n IP: 10.88.9.60\n IP: 2001:4860:4860::93c\nControlled By: StatefulSet/ss\nContainers:\n webserver:\n Container ID: containerd://7c9cd529885ddf5b1f5c569a3d4bf876a6982c84f989273e69609971c5d443e7\n Image: registry.k8s.io/e2e-test-images/httpd:2.4.38-2\n Image ID: registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\n Port: <none>\n Host Port: <none>\n State: Waiting\n Reason: CrashLoopBackOff\n Last State: Terminated\n Reason: Error\n Exit Code: 137\n Started: Tue, 08 Nov 2022 19:23:06 +0000\n Finished: Tue, 08 Nov 2022 19:23:07 +0000\n Ready: False\n Restart Count: 6\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nkcgn (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-nkcgn:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10m default-scheduler Successfully assigned statefulset-1165/ss-0 to 172.17.0.1\n Warning Unhealthy 9m56s kubelet Readiness probe failed: Get \"http://10.88.8.31:80/index.html\": dial tcp 10.88.8.31:80: connect: connection refused\n Normal Pulled 9m42s (x3 over 9m58s) kubelet Container image \"registry.k8s.io/e2e-test-images/httpd:2.4.38-2\" already present on machine\n Normal Created 9m42s (x3 over 9m58s) kubelet Created container webserver\n Normal Started 9m42s (x3 over 9m58s) kubelet Started container webserver\n Normal SandboxChanged 9m33s (x7 over 9m56s) kubelet Pod sandbox changed, it will be killed and re-created.\n Warning BackOff 4m57s (x126 over 9m51s) kubelet Back-off restarting failed container webserver in pod ss-0_statefulset-1165(190a7ab3-b0f4-4a9f-a76d-b6d64c1e9763)\n" Nov 8 19:27:34.265: INFO: Output of kubectl describe ss-0: Name: ss-0 Namespace: statefulset-1165 Priority: 0 Service Account: default Node: 172.17.0.1/172.17.0.1 Start Time: Tue, 08 Nov 2022 19:17:34 +0000 Labels: baz=blah controller-revision-hash=ss-6557876d87 foo=bar statefulset.kubernetes.io/pod-name=ss-0 Annotations: <none> Status: Running IP: 10.88.9.60 IPs: IP: 10.88.9.60 IP: 2001:4860:4860::93c Controlled By: StatefulSet/ss Containers: webserver: Container ID: containerd://7c9cd529885ddf5b1f5c569a3d4bf876a6982c84f989273e69609971c5d443e7 Image: registry.k8s.io/e2e-test-images/httpd:2.4.38-2 Image ID: registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 Port: <none> Host Port: <none> State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 137 Started: Tue, 08 Nov 2022 19:23:06 +0000 Finished: Tue, 08 Nov 2022 19:23:07 +0000 Ready: False Restart Count: 6 Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nkcgn (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-nkcgn: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 10m default-scheduler Successfully assigned statefulset-1165/ss-0 to 172.17.0.1 Warning Unhealthy 9m56s kubelet Readiness probe failed: Get "http://10.88.8.31:80/index.html": dial tcp 10.88.8.31:80: connect: connection refused Normal Pulled 9m42s (x3 over 9m58s) kubelet Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-2" already present on machine Normal Created 9m42s (x3 over 9m58s) kubelet Created container webserver Normal Started 9m42s (x3 over 9m58s) kubelet Started container webserver Normal SandboxChanged 9m33s (x7 over 9m56s) kubelet Pod sandbox changed, it will be killed and re-created. Warning BackOff 4m57s (x126 over 9m51s) kubelet Back-off restarting failed container webserver in pod ss-0_statefulset-1165(190a7ab3-b0f4-4a9f-a76d-b6d64c1e9763) Nov 8 19:27:34.265: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=statefulset-1165 logs ss-0 --tail=100' Nov 8 19:27:34.402: INFO: stderr: "" Nov 8 19:27:34.402: INFO: stdout: "[Tue Nov 08 19:23:06.525219 2022] [mpm_event:notice] [pid 1:tid 140042197576552] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue Nov 08 19:23:06.525303 2022] [core:notice] [pid 1:tid 140042197576552] AH00094: Command line: 'httpd -D FOREGROUND'\n10.88.0.1 - - [08/Nov/2022:19:23:06 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.88.0.1 - - [08/Nov/2022:19:23:07 +0000] \"GET /index.html HTTP/1.1\" 200 45\n" Nov 8 19:27:34.402: INFO: Last 100 log lines of ss-0: [Tue Nov 08 19:23:06.525219 2022] [mpm_event:notice] [pid 1:tid 140042197576552] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Tue Nov 08 19:23:06.525303 2022] [core:notice] [pid 1:tid 140042197576552] AH00094: Command line: 'httpd -D FOREGROUND' 10.88.0.1 - - [08/Nov/2022:19:23:06 +0000] "GET /index.html HTTP/1.1" 200 45 10.88.0.1 - - [08/Nov/2022:19:23:07 +0000] "GET /index.html HTTP/1.1" 200 45 Nov 8 19:27:34.402: INFO: Deleting all statefulset in ns statefulset-1165 Nov 8 19:27:34.406: INFO: Scaling statefulset ss to 0 Nov 8 19:27:44.434: INFO: Waiting for statefulset status.replicas updated to 0 Nov 8 19:27:44.437: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32 Nov 8 19:27:44.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 19:27:44.454 STEP: Collecting events from namespace "statefulset-1165". 11/08/22 19:27:44.454 STEP: Found 9 events. 11/08/22 19:27:44.459 Nov 8 19:27:44.459: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ss-0: { } Scheduled: Successfully assigned statefulset-1165/ss-0 to 172.17.0.1 Nov 8 19:27:44.459: INFO: At 2022-11-08 19:17:34 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful Nov 8 19:27:44.459: INFO: At 2022-11-08 19:17:36 +0000 UTC - event for ss-0: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-2" already present on machine Nov 8 19:27:44.459: INFO: At 2022-11-08 19:17:36 +0000 UTC - event for ss-0: {kubelet 172.17.0.1} Created: Created container webserver Nov 8 19:27:44.459: INFO: At 2022-11-08 19:17:36 +0000 UTC - event for ss-0: {kubelet 172.17.0.1} Started: Started container webserver Nov 8 19:27:44.459: INFO: At 2022-11-08 19:17:38 +0000 UTC - event for ss-0: {kubelet 172.17.0.1} Unhealthy: Readiness probe failed: Get "http://10.88.8.31:80/index.html": dial tcp 10.88.8.31:80: connect: connection refused Nov 8 19:27:44.459: INFO: At 2022-11-08 19:17:38 +0000 UTC - event for ss-0: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 19:27:44.459: INFO: At 2022-11-08 19:17:43 +0000 UTC - event for ss-0: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container webserver in pod ss-0_statefulset-1165(190a7ab3-b0f4-4a9f-a76d-b6d64c1e9763) Nov 8 19:27:44.459: INFO: At 2022-11-08 19:27:34 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful Nov 8 19:27:44.462: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 19:27:44.462: INFO: Nov 8 19:27:44.467: INFO: Logging node info for node 172.17.0.1 Nov 8 19:27:44.470: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 10328 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 19:25:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 19:25:24 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 19:25:24 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 19:25:24 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 19:25:24 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 19:27:44.470: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 19:27:44.473: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 19:27:44.479: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 19:27:44.480: INFO: Container coredns ready: false, restart count 20 Nov 8 19:27:44.525: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 STEP: Destroying namespace "statefulset-1165" for this suite. 11/08/22 19:27:44.525
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sUpdate\sDemo\sshould\screate\sand\sstop\sa\sreplication\scontroller\s\s\[Conformance\]$'
test/e2e/kubectl/kubectl.go:2431 k8s.io/kubernetes/test/e2e/kubectl.validateController({0x7efa648, 0xc0034624e0}, {0xc0007b2ab0?, 0x0?}, 0x2, {0x74c4ed1, 0xb}, {0x74dccef, 0x10}, 0xc00332b170, ...) test/e2e/kubectl/kubectl.go:2431 +0x49d k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.2() test/e2e/kubectl/kubectl.go:344 +0x1ecfrom junit_01.xml
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 18:13:22.277 Nov 8 18:13:22.277: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/08/22 18:13:22.279 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 18:13:22.307 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 18:13:22.318 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Update Demo test/e2e/kubectl/kubectl.go:326 [It] should create and stop a replication controller [Conformance] test/e2e/kubectl/kubectl.go:339 STEP: creating a replication controller 11/08/22 18:13:22.326 Nov 8 18:13:22.326: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 create -f -' Nov 8 18:13:22.743: INFO: stderr: "" Nov 8 18:13:22.743: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 11/08/22 18:13:22.743 Nov 8 18:13:22.743: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:13:22.893: INFO: stderr: "" Nov 8 18:13:22.893: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:13:22.893: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:13:23.029: INFO: stderr: "" Nov 8 18:13:23.029: INFO: stdout: "" Nov 8 18:13:23.029: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:13:28.029: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:13:28.165: INFO: stderr: "" Nov 8 18:13:28.165: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:13:28.166: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:13:28.283: INFO: stderr: "" Nov 8 18:13:28.283: INFO: stdout: "" Nov 8 18:13:28.283: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:13:33.283: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:13:33.374: INFO: stderr: "" Nov 8 18:13:33.374: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:13:33.374: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:13:33.464: INFO: stderr: "" Nov 8 18:13:33.464: INFO: stdout: "" Nov 8 18:13:33.464: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:13:38.465: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:13:38.584: INFO: stderr: "" Nov 8 18:13:38.584: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:13:38.584: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:13:38.702: INFO: stderr: "" Nov 8 18:13:38.702: INFO: stdout: "" Nov 8 18:13:38.702: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:13:43.703: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:13:43.824: INFO: stderr: "" Nov 8 18:13:43.824: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:13:43.825: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:13:43.928: INFO: stderr: "" Nov 8 18:13:43.928: INFO: stdout: "" Nov 8 18:13:43.928: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:13:48.928: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:13:49.039: INFO: stderr: "" Nov 8 18:13:49.039: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:13:49.040: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:13:49.150: INFO: stderr: "" Nov 8 18:13:49.150: INFO: stdout: "" Nov 8 18:13:49.150: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:13:54.151: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:13:54.261: INFO: stderr: "" Nov 8 18:13:54.261: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:13:54.261: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:13:54.368: INFO: stderr: "" Nov 8 18:13:54.368: INFO: stdout: "" Nov 8 18:13:54.368: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:13:59.369: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:13:59.484: INFO: stderr: "" Nov 8 18:13:59.484: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:13:59.484: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:13:59.598: INFO: stderr: "" Nov 8 18:13:59.598: INFO: stdout: "" Nov 8 18:13:59.598: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:14:04.599: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:14:04.741: INFO: stderr: "" Nov 8 18:14:04.741: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:14:04.741: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:14:04.854: INFO: stderr: "" Nov 8 18:14:04.854: INFO: stdout: "" Nov 8 18:14:04.854: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:14:09.854: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:14:09.960: INFO: stderr: "" Nov 8 18:14:09.960: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:14:09.960: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:14:10.063: INFO: stderr: "" Nov 8 18:14:10.063: INFO: stdout: "" Nov 8 18:14:10.063: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:14:15.064: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:14:15.181: INFO: stderr: "" Nov 8 18:14:15.181: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:14:15.181: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:14:15.290: INFO: stderr: "" Nov 8 18:14:15.290: INFO: stdout: "" Nov 8 18:14:15.290: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:14:20.291: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:14:20.403: INFO: stderr: "" Nov 8 18:14:20.403: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:14:20.404: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:14:20.515: INFO: stderr: "" Nov 8 18:14:20.515: INFO: stdout: "" Nov 8 18:14:20.515: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:14:25.516: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:14:25.626: INFO: stderr: "" Nov 8 18:14:25.626: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:14:25.626: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:14:25.733: INFO: stderr: "" Nov 8 18:14:25.733: INFO: stdout: "" Nov 8 18:14:25.733: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:14:30.735: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:14:30.841: INFO: stderr: "" Nov 8 18:14:30.841: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:14:30.841: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:14:30.947: INFO: stderr: "" Nov 8 18:14:30.947: INFO: stdout: "" Nov 8 18:14:30.947: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:14:35.948: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:14:36.054: INFO: stderr: "" Nov 8 18:14:36.054: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:14:36.054: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:14:36.170: INFO: stderr: "" Nov 8 18:14:36.170: INFO: stdout: "" Nov 8 18:14:36.170: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:14:41.171: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:14:41.274: INFO: stderr: "" Nov 8 18:14:41.274: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:14:41.274: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:14:41.370: INFO: stderr: "" Nov 8 18:14:41.370: INFO: stdout: "" Nov 8 18:14:41.370: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:14:46.371: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:14:46.471: INFO: stderr: "" Nov 8 18:14:46.471: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:14:46.472: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:14:46.566: INFO: stderr: "" Nov 8 18:14:46.566: INFO: stdout: "" Nov 8 18:14:46.566: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:14:51.567: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:14:51.681: INFO: stderr: "" Nov 8 18:14:51.681: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:14:51.681: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:14:51.787: INFO: stderr: "" Nov 8 18:14:51.787: INFO: stdout: "" Nov 8 18:14:51.787: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:14:56.787: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:14:56.888: INFO: stderr: "" Nov 8 18:14:56.888: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:14:56.888: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:14:56.983: INFO: stderr: "" Nov 8 18:14:56.984: INFO: stdout: "" Nov 8 18:14:56.984: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:15:01.984: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:15:02.090: INFO: stderr: "" Nov 8 18:15:02.090: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:15:02.090: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:15:02.183: INFO: stderr: "" Nov 8 18:15:02.183: INFO: stdout: "" Nov 8 18:15:02.183: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:15:07.184: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:15:07.279: INFO: stderr: "" Nov 8 18:15:07.280: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:15:07.280: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:15:07.380: INFO: stderr: "" Nov 8 18:15:07.380: INFO: stdout: "" Nov 8 18:15:07.380: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:15:12.380: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:15:12.504: INFO: stderr: "" Nov 8 18:15:12.504: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:15:12.504: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:15:12.618: INFO: stderr: "" Nov 8 18:15:12.618: INFO: stdout: "" Nov 8 18:15:12.618: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:15:17.619: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:15:17.717: INFO: stderr: "" Nov 8 18:15:17.718: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:15:17.718: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:15:17.822: INFO: stderr: "" Nov 8 18:15:17.822: INFO: stdout: "" Nov 8 18:15:17.822: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:15:22.822: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:15:22.911: INFO: stderr: "" Nov 8 18:15:22.911: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:15:22.912: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:15:23.011: INFO: stderr: "" Nov 8 18:15:23.011: INFO: stdout: "" Nov 8 18:15:23.011: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:15:28.012: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:15:28.114: INFO: stderr: "" Nov 8 18:15:28.114: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:15:28.114: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:15:28.210: INFO: stderr: "" Nov 8 18:15:28.210: INFO: stdout: "" Nov 8 18:15:28.210: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:15:33.211: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:15:33.321: INFO: stderr: "" Nov 8 18:15:33.321: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:15:33.321: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:15:33.425: INFO: stderr: "" Nov 8 18:15:33.425: INFO: stdout: "" Nov 8 18:15:33.425: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:15:38.426: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:15:38.543: INFO: stderr: "" Nov 8 18:15:38.543: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:15:38.543: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:15:38.641: INFO: stderr: "" Nov 8 18:15:38.641: INFO: stdout: "" Nov 8 18:15:38.641: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:15:43.642: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:15:43.749: INFO: stderr: "" Nov 8 18:15:43.750: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:15:43.750: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:15:43.846: INFO: stderr: "" Nov 8 18:15:43.846: INFO: stdout: "" Nov 8 18:15:43.846: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:15:48.846: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:15:48.951: INFO: stderr: "" Nov 8 18:15:48.951: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:15:48.951: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:15:49.055: INFO: stderr: "" Nov 8 18:15:49.055: INFO: stdout: "" Nov 8 18:15:49.055: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:15:54.056: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:15:54.172: INFO: stderr: "" Nov 8 18:15:54.172: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:15:54.172: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:15:54.295: INFO: stderr: "" Nov 8 18:15:54.295: INFO: stdout: "" Nov 8 18:15:54.295: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:15:59.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:15:59.392: INFO: stderr: "" Nov 8 18:15:59.392: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:15:59.392: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:15:59.485: INFO: stderr: "" Nov 8 18:15:59.485: INFO: stdout: "" Nov 8 18:15:59.485: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:16:04.486: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:16:04.604: INFO: stderr: "" Nov 8 18:16:04.604: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:16:04.604: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:16:04.731: INFO: stderr: "" Nov 8 18:16:04.731: INFO: stdout: "" Nov 8 18:16:04.731: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:16:09.732: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:16:09.841: INFO: stderr: "" Nov 8 18:16:09.841: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:16:09.842: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:16:09.936: INFO: stderr: "" Nov 8 18:16:09.936: INFO: stdout: "" Nov 8 18:16:09.936: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:16:14.937: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:16:15.036: INFO: stderr: "" Nov 8 18:16:15.036: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:16:15.036: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:16:15.134: INFO: stderr: "" Nov 8 18:16:15.134: INFO: stdout: "" Nov 8 18:16:15.134: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:16:20.135: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:16:20.239: INFO: stderr: "" Nov 8 18:16:20.239: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:16:20.239: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:16:20.337: INFO: stderr: "" Nov 8 18:16:20.337: INFO: stdout: "true" Nov 8 18:16:20.338: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Nov 8 18:16:20.450: INFO: stderr: "" Nov 8 18:16:20.450: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.5" Nov 8 18:16:20.450: INFO: validating pod update-demo-nautilus-4lxzf Nov 8 18:16:23.521: INFO: update-demo-nautilus-4lxzf is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-4lxzf) Nov 8 18:16:28.522: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:16:28.655: INFO: stderr: "" Nov 8 18:16:28.656: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:16:28.656: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:16:28.779: INFO: stderr: "" Nov 8 18:16:28.779: INFO: stdout: "" Nov 8 18:16:28.779: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:16:33.780: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:16:33.882: INFO: stderr: "" Nov 8 18:16:33.882: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:16:33.882: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:16:33.977: INFO: stderr: "" Nov 8 18:16:33.977: INFO: stdout: "" Nov 8 18:16:33.977: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:16:38.978: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:16:39.076: INFO: stderr: "" Nov 8 18:16:39.076: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:16:39.077: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:16:39.178: INFO: stderr: "" Nov 8 18:16:39.178: INFO: stdout: "" Nov 8 18:16:39.178: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:16:44.179: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:16:44.282: INFO: stderr: "" Nov 8 18:16:44.282: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:16:44.282: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:16:44.386: INFO: stderr: "" Nov 8 18:16:44.386: INFO: stdout: "" Nov 8 18:16:44.386: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:16:49.386: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:16:49.494: INFO: stderr: "" Nov 8 18:16:49.494: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:16:49.495: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:16:49.599: INFO: stderr: "" Nov 8 18:16:49.599: INFO: stdout: "" Nov 8 18:16:49.599: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:16:54.600: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:16:54.695: INFO: stderr: "" Nov 8 18:16:54.695: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:16:54.695: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:16:54.793: INFO: stderr: "" Nov 8 18:16:54.793: INFO: stdout: "" Nov 8 18:16:54.793: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:16:59.794: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:16:59.886: INFO: stderr: "" Nov 8 18:16:59.886: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:16:59.887: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:16:59.972: INFO: stderr: "" Nov 8 18:16:59.972: INFO: stdout: "" Nov 8 18:16:59.972: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:17:04.973: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:17:05.077: INFO: stderr: "" Nov 8 18:17:05.077: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:17:05.077: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:17:05.182: INFO: stderr: "" Nov 8 18:17:05.182: INFO: stdout: "" Nov 8 18:17:05.182: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:17:10.183: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:17:10.307: INFO: stderr: "" Nov 8 18:17:10.307: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:17:10.307: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:17:10.440: INFO: stderr: "" Nov 8 18:17:10.440: INFO: stdout: "" Nov 8 18:17:10.440: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:17:15.441: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:17:15.552: INFO: stderr: "" Nov 8 18:17:15.552: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:17:15.552: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:17:15.659: INFO: stderr: "" Nov 8 18:17:15.659: INFO: stdout: "" Nov 8 18:17:15.659: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:17:20.660: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:17:20.790: INFO: stderr: "" Nov 8 18:17:20.790: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:17:20.790: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:17:20.890: INFO: stderr: "" Nov 8 18:17:20.890: INFO: stdout: "" Nov 8 18:17:20.890: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:17:25.891: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:17:26.002: INFO: stderr: "" Nov 8 18:17:26.002: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:17:26.002: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:17:26.104: INFO: stderr: "" Nov 8 18:17:26.104: INFO: stdout: "" Nov 8 18:17:26.104: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:17:31.105: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:17:31.234: INFO: stderr: "" Nov 8 18:17:31.234: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:17:31.234: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:17:31.343: INFO: stderr: "" Nov 8 18:17:31.343: INFO: stdout: "" Nov 8 18:17:31.343: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:17:36.344: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:17:36.457: INFO: stderr: "" Nov 8 18:17:36.457: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:17:36.457: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:17:36.553: INFO: stderr: "" Nov 8 18:17:36.553: INFO: stdout: "" Nov 8 18:17:36.553: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:17:41.554: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:17:41.659: INFO: stderr: "" Nov 8 18:17:41.659: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:17:41.659: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:17:41.786: INFO: stderr: "" Nov 8 18:17:41.786: INFO: stdout: "" Nov 8 18:17:41.786: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:17:46.787: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:17:46.909: INFO: stderr: "" Nov 8 18:17:46.909: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:17:46.909: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:17:47.017: INFO: stderr: "" Nov 8 18:17:47.017: INFO: stdout: "" Nov 8 18:17:47.017: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:17:52.018: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:17:52.129: INFO: stderr: "" Nov 8 18:17:52.129: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:17:52.130: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:17:52.227: INFO: stderr: "" Nov 8 18:17:52.227: INFO: stdout: "" Nov 8 18:17:52.227: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:17:57.227: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:17:57.322: INFO: stderr: "" Nov 8 18:17:57.322: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:17:57.322: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:17:57.414: INFO: stderr: "" Nov 8 18:17:57.414: INFO: stdout: "" Nov 8 18:17:57.414: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:18:02.415: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:18:02.517: INFO: stderr: "" Nov 8 18:18:02.517: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:18:02.517: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:18:02.612: INFO: stderr: "" Nov 8 18:18:02.612: INFO: stdout: "" Nov 8 18:18:02.612: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:18:07.613: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:18:07.716: INFO: stderr: "" Nov 8 18:18:07.716: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:18:07.717: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:18:07.816: INFO: stderr: "" Nov 8 18:18:07.816: INFO: stdout: "" Nov 8 18:18:07.816: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:18:12.817: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:18:12.920: INFO: stderr: "" Nov 8 18:18:12.920: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:18:12.920: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:18:13.023: INFO: stderr: "" Nov 8 18:18:13.023: INFO: stdout: "" Nov 8 18:18:13.023: INFO: update-demo-nautilus-4lxzf is created but not running Nov 8 18:18:18.024: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Nov 8 18:18:18.130: INFO: stderr: "" Nov 8 18:18:18.130: INFO: stdout: "update-demo-nautilus-4lxzf update-demo-nautilus-lwbm5 " Nov 8 18:18:18.130: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods update-demo-nautilus-4lxzf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Nov 8 18:18:18.234: INFO: stderr: "" Nov 8 18:18:18.234: INFO: stdout: "" Nov 8 18:18:18.234: INFO: update-demo-nautilus-4lxzf is created but not running ------------------------------ Automatically polling progress: [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] (Spec Runtime: 5m0.05s) test/e2e/kubectl/kubectl.go:339 In [It] (Node Runtime: 5m0.001s) test/e2e/kubectl/kubectl.go:339 At [By Step] waiting for all containers in name=update-demo pods to come up. (Step Runtime: 4m59.584s) test/e2e/kubectl/kubectl.go:2391 Spec Goroutine goroutine 578 [sleep] time.Sleep(0x12a05f200) /usr/local/go/src/runtime/time.go:195 > k8s.io/kubernetes/test/e2e/kubectl.validateController({0x7efa648, 0xc0034624e0}, {0xc0007b2ab0?, 0x0?}, 0x2, {0x74c4ed1, 0xb}, {0x74dccef, 0x10}, 0xc00332b170, ...) test/e2e/kubectl/kubectl.go:2393 | ginkgo.By(fmt.Sprintf("waiting for all containers in %s pods to come up.", testname)) //testname should be selector | waitLoop: > for start := time.Now(); time.Since(start) < framework.PodStartTimeout; time.Sleep(5 * time.Second) { | getPodsOutput := e2ekubectl.RunKubectlOrDie(ns, "get", "pods", "-o", "template", getPodsTemplate, "-l", testname) | pods := strings.Fields(getPodsOutput) > k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.2() test/e2e/kubectl/kubectl.go:344 | ginkgo.By("creating a replication controller") | e2ekubectl.RunKubectlOrDieInput(ns, nautilus, "create", "-f", "-") > validateController(c, nautilusImage, 2, "update-demo", updateDemoSelector, getUDData("nautilus.jpg", ns), ns) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bbe, 0xc00170f980}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:18:23.235: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.validateController({0x7efa648, 0xc0034624e0}, {0xc0007b2ab0?, 0x0?}, 0x2, {0x74c4ed1, 0xb}, {0x74dccef, 0x10}, 0xc00332b170, ...) test/e2e/kubectl/kubectl.go:2431 +0x49d k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.2() test/e2e/kubectl/kubectl.go:344 +0x1ec STEP: using delete to clean up resources 11/08/22 18:18:23.235 Nov 8 18:18:23.235: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 delete --grace-period=0 --force -f -' Nov 8 18:18:23.347: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 8 18:18:23.347: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Nov 8 18:18:23.347: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get rc,svc -l name=update-demo --no-headers' Nov 8 18:18:23.454: INFO: stderr: "No resources found in kubectl-2855 namespace.\n" Nov 8 18:18:23.454: INFO: stdout: "" Nov 8 18:18:23.454: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2855 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 8 18:18:23.562: INFO: stderr: "" Nov 8 18:18:23.562: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 8 18:18:23.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 18:18:23.578 STEP: Collecting events from namespace "kubectl-2855". 11/08/22 18:18:23.578 STEP: Found 20 events. 11/08/22 18:18:23.582 Nov 8 18:18:23.582: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for update-demo-nautilus-4lxzf: { } Scheduled: Successfully assigned kubectl-2855/update-demo-nautilus-4lxzf to 172.17.0.1 Nov 8 18:18:23.582: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for update-demo-nautilus-lwbm5: { } Scheduled: Successfully assigned kubectl-2855/update-demo-nautilus-lwbm5 to 172.17.0.1 Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:22 +0000 UTC - event for update-demo-nautilus: {replication-controller } SuccessfulCreate: Created pod: update-demo-nautilus-4lxzf Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:22 +0000 UTC - event for update-demo-nautilus: {replication-controller } SuccessfulCreate: Created pod: update-demo-nautilus-lwbm5 Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:25 +0000 UTC - event for update-demo-nautilus-4lxzf: {kubelet 172.17.0.1} Pulling: Pulling image "registry.k8s.io/e2e-test-images/nautilus:1.5" Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:25 +0000 UTC - event for update-demo-nautilus-lwbm5: {kubelet 172.17.0.1} Pulling: Pulling image "registry.k8s.io/e2e-test-images/nautilus:1.5" Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:28 +0000 UTC - event for update-demo-nautilus-4lxzf: {kubelet 172.17.0.1} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/nautilus:1.5" in 127.073868ms (3.194808026s including waiting) Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:28 +0000 UTC - event for update-demo-nautilus-4lxzf: {kubelet 172.17.0.1} Failed: Error: failed to get sandbox container task: no running task found: task b7a62a14b2fea4ab85a4edc5da3b8524cfd9ae36d5073b52937402af628897ef not found: not found Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:28 +0000 UTC - event for update-demo-nautilus-4lxzf: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:28 +0000 UTC - event for update-demo-nautilus-lwbm5: {kubelet 172.17.0.1} Failed: Error: failed to get sandbox container task: no running task found: task 222551514b6344c04fbc4d586aec3923faa5ab1f76296795f8ad51a60c4d76f0 not found: not found Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:28 +0000 UTC - event for update-demo-nautilus-lwbm5: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:28 +0000 UTC - event for update-demo-nautilus-lwbm5: {kubelet 172.17.0.1} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/nautilus:1.5" in 3.185088892s (3.185261017s including waiting) Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:30 +0000 UTC - event for update-demo-nautilus-4lxzf: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/nautilus:1.5" already present on machine Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:30 +0000 UTC - event for update-demo-nautilus-4lxzf: {kubelet 172.17.0.1} Created: Created container update-demo Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:31 +0000 UTC - event for update-demo-nautilus-4lxzf: {kubelet 172.17.0.1} Started: Started container update-demo Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:31 +0000 UTC - event for update-demo-nautilus-lwbm5: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/nautilus:1.5" already present on machine Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:31 +0000 UTC - event for update-demo-nautilus-lwbm5: {kubelet 172.17.0.1} Created: Created container update-demo Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:31 +0000 UTC - event for update-demo-nautilus-lwbm5: {kubelet 172.17.0.1} Started: Started container update-demo Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:37 +0000 UTC - event for update-demo-nautilus-4lxzf: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container update-demo in pod update-demo-nautilus-4lxzf_kubectl-2855(1ecc05c3-561f-45bf-9861-68bc08030ed8) Nov 8 18:18:23.582: INFO: At 2022-11-08 18:13:38 +0000 UTC - event for update-demo-nautilus-lwbm5: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container update-demo in pod update-demo-nautilus-lwbm5_kubectl-2855(0aaf8076-d752-4373-b6b2-49e3fc84248e) Nov 8 18:18:23.587: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 18:18:23.587: INFO: update-demo-nautilus-4lxzf 172.17.0.1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:13:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:16:20 +0000 UTC ContainersNotReady containers with unready status: [update-demo]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:16:20 +0000 UTC ContainersNotReady containers with unready status: [update-demo]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:13:22 +0000 UTC }] Nov 8 18:18:23.587: INFO: update-demo-nautilus-lwbm5 172.17.0.1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:13:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:16:20 +0000 UTC ContainersNotReady containers with unready status: [update-demo]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:16:20 +0000 UTC ContainersNotReady containers with unready status: [update-demo]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:13:22 +0000 UTC }] Nov 8 18:18:23.587: INFO: Nov 8 18:18:23.624: INFO: Logging node info for node 172.17.0.1 Nov 8 18:18:23.629: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 893 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:13:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:13:39 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:13:39 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:13:39 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:13:39 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 18:18:23.629: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 18:18:23.635: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 18:18:23.642: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 18:18:23.642: INFO: Container coredns ready: false, restart count 6 Nov 8 18:18:23.642: INFO: update-demo-nautilus-lwbm5 started at 2022-11-08 18:13:22 +0000 UTC (0+1 container statuses recorded) Nov 8 18:18:23.642: INFO: Container update-demo ready: false, restart count 5 Nov 8 18:18:23.642: INFO: update-demo-nautilus-4lxzf started at 2022-11-08 18:13:22 +0000 UTC (0+1 container statuses recorded) Nov 8 18:18:23.642: INFO: Container update-demo ready: false, restart count 5 Nov 8 18:18:23.694: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-2855" for this suite. 11/08/22 18:18:23.695
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sDNS\sshould\sprovide\sDNS\sfor\spods\sfor\sHostname\s\[Conformance\]$'
test/e2e/network/dns_common.go:455 k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600?, 0x4?, 0x4?}, {0x74b5afe?, 0x7?}, 0xc002569400?, {0x7efa648?, 0xc003c58d00?}, 0x0?, {0x0, ...}) test/e2e/network/dns_common.go:455 +0x1dc k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 +0x452 k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 +0x8b4from junit_01.xml
[BeforeEach] [sig-network] DNS set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 18:29:28.85 Nov 8 18:29:28.850: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename dns 11/08/22 18:29:28.852 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 18:29:28.869 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 18:29:28.872 [BeforeEach] [sig-network] DNS test/e2e/framework/metrics/init/init.go:31 [It] should provide DNS for pods for Hostname [Conformance] test/e2e/network/dns.go:248 STEP: Creating a test headless service 11/08/22 18:29:28.876 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done 11/08/22 18:29:28.883 STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done 11/08/22 18:29:28.883 STEP: creating a pod to probe DNS 11/08/22 18:29:28.883 STEP: submitting the pod to kubernetes 11/08/22 18:29:28.883 Nov 8 18:29:28.896: INFO: Waiting up to 15m0s for pod "dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63" in namespace "dns-8524" to be "running" Nov 8 18:29:28.905: INFO: Pod "dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63": Phase="Pending", Reason="", readiness=false. Elapsed: 8.815443ms Nov 8 18:29:30.910: INFO: Pod "dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013898868s Nov 8 18:29:32.909: INFO: Pod "dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012992685s Nov 8 18:29:34.913: INFO: Pod "dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016692833s Nov 8 18:29:36.910: INFO: Pod "dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014134318s Nov 8 18:29:38.912: INFO: Pod "dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015647235s Nov 8 18:29:40.910: INFO: Pod "dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63": Phase="Pending", Reason="", readiness=false. Elapsed: 12.014386367s Nov 8 18:29:42.911: INFO: Pod "dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63": Phase="Running", Reason="", readiness=false. Elapsed: 14.014820357s Nov 8 18:29:42.911: INFO: Pod "dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63" satisfied condition "running" STEP: retrieving the pod 11/08/22 18:29:42.911 STEP: looking for the results for each expected name from probers 11/08/22 18:29:42.916 Nov 8 18:29:42.921: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server rejected our request for an unknown reason (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:29:42.924: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server rejected our request for an unknown reason (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:29:42.928: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server rejected our request for an unknown reason (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:29:42.933: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server rejected our request for an unknown reason (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:29:42.933: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] Nov 8 18:29:47.939: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:29:47.943: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:29:47.948: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:29:47.953: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:29:47.954: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] Nov 8 18:30:22.939: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:30:26.025: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:30:26.039: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:30:26.043: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:30:26.043: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] Nov 8 18:30:57.939: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:31:00.994: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:31:01.007: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2] Nov 8 18:31:32.939: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:31:36.002: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:31:39.073: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:31:42.145: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:31:42.146: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] Nov 8 18:31:46.018: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:31:46.023: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:31:46.028: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:31:46.032: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:31:46.032: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] Nov 8 18:32:17.938: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:32:20.994: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:32:20.998: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:32:21.002: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:32:21.002: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] Nov 8 18:32:52.938: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:32:56.005: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:32:59.074: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:33:02.146: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:33:02.146: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] Nov 8 18:33:06.018: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:33:06.023: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:33:06.027: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:33:06.032: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:33:06.032: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] Nov 8 18:33:37.940: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:33:37.945: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:33:37.950: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:33:37.955: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:33:37.955: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] Nov 8 18:34:12.938: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:12.942: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:12.946: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:12.952: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:12.952: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] Nov 8 18:34:17.938: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:17.941: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:17.945: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:17.950: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:17.950: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] Nov 8 18:34:26.018: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:26.024: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:26.030: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:26.033: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:26.033: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 5m0.027s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 5m0.001s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 4m45.96s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000af0d80, 0xc00441ec00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003140800, 0xc00441ec00, {0xa0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00168f040?}, 0xc00441ec00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc00168f040, 0xc00441ec00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6ee5440?, 0xc00332a660?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0004146c0, 0xc00441ea00) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00441ea00, {0x7e8b940, 0xc0004146c0}, {0x73cd720?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc002c45a10, 0xc00441ea00, {0x7f963eb825b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc002c45a10, 0xc00441ea00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00441e800, {0x7ebe6e0, 0xc002446240}, 0xc000f60540?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00441e800, {0x7ebe6e0, 0xc002446240}) vendor/k8s.io/client-go/rest/request.go:1005 > k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() test/e2e/network/dns_common.go:468 | Name(pod.Name). | Suffix(fileDir, fileName). > Do(ctx).Raw() | | if err != nil { k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26f2811, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7ebe6a8?, 0xc0001a8000?}, 0xc0042637c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 5m20.028s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 5m20.003s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 5m5.962s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000af0d80, 0xc00441ec00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003140800, 0xc00441ec00, {0xa0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00168f040?}, 0xc00441ec00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc00168f040, 0xc00441ec00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6ee5440?, 0xc00332a660?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0004146c0, 0xc00441ea00) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00441ea00, {0x7e8b940, 0xc0004146c0}, {0x73cd720?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc002c45a10, 0xc00441ea00, {0x7f963eb825b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc002c45a10, 0xc00441ea00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00441e800, {0x7ebe6e0, 0xc002446240}, 0xc000f60540?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00441e800, {0x7ebe6e0, 0xc002446240}) vendor/k8s.io/client-go/rest/request.go:1005 > k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() test/e2e/network/dns_common.go:468 | Name(pod.Name). | Suffix(fileDir, fileName). > Do(ctx).Raw() | | if err != nil { k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26f2811, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7ebe6a8?, 0xc0001a8000?}, 0xc0042637c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:34:57.939: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:57.944: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:57.947: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:57.951: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:34:57.951: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] Nov 8 18:35:06.021: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 5m40.031s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 5m40.005s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 5m25.965s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000af0d80, 0xc00441fc00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003140800, 0xc00441fc00, {0xa0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00168f040?}, 0xc00441fc00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc00168f040, 0xc00441fc00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6ee5440?, 0xc00332b440?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0004146c0, 0xc00441fb00) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00441fb00, {0x7e8b940, 0xc0004146c0}, {0x73cd720?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc002c45a10, 0xc00441fb00, {0x7f963eb825b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc002c45a10, 0xc00441fb00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00441f900, {0x7ebe6e0, 0xc00338e1e0}, 0xc000f60540?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00441f900, {0x7ebe6e0, 0xc00338e1e0}) vendor/k8s.io/client-go/rest/request.go:1005 > k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() test/e2e/network/dns_common.go:468 | Name(pod.Name). | Suffix(fileDir, fileName). > Do(ctx).Raw() | | if err != nil { k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26f2811, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7ebe6a8?, 0xc0001a8000?}, 0xc0042637c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:35:09.090: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:35:09.100: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2] Nov 8 18:35:12.942: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:35:12.949: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:35:12.956: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:35:12.962: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:35:12.962: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] Nov 8 18:35:20.994: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:35:20.999: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:35:21.005: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:35:21.012: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:35:21.012: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 6m0.033s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 6m0.007s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 5m45.967s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000af0d80, 0xc00071c400) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003140800, 0xc00071c400, {0xa0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00168f040?}, 0xc00071c400?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc00168f040, 0xc00071c400) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6ee5440?, 0xc00483f920?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0004146c0, 0xc00071c300) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00071c300, {0x7e8b940, 0xc0004146c0}, {0x73cd720?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc002c45a10, 0xc00071c300, {0x7f963eb82108?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc002c45a10, 0xc00071c300) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00071c100, {0x7ebe6e0, 0xc003208420}, 0xc000f60540?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00071c100, {0x7ebe6e0, 0xc003208420}) vendor/k8s.io/client-go/rest/request.go:1005 > k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() test/e2e/network/dns_common.go:468 | Name(pod.Name). | Suffix(fileDir, fileName). > Do(ctx).Raw() | | if err != nil { k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26f2811, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7ebe6a8?, 0xc0001a8000?}, 0xc0042637c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 6m20.034s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 6m20.009s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 6m5.968s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000af0d80, 0xc00071c400) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003140800, 0xc00071c400, {0xa0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00168f040?}, 0xc00071c400?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc00168f040, 0xc00071c400) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6ee5440?, 0xc00483f920?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0004146c0, 0xc00071c300) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00071c300, {0x7e8b940, 0xc0004146c0}, {0x73cd720?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc002c45a10, 0xc00071c300, {0x7f963eb82108?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc002c45a10, 0xc00071c300) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00071c100, {0x7ebe6e0, 0xc003208420}, 0xc000f60540?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00071c100, {0x7ebe6e0, 0xc003208420}) vendor/k8s.io/client-go/rest/request.go:1005 > k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() test/e2e/network/dns_common.go:468 | Name(pod.Name). | Suffix(fileDir, fileName). > Do(ctx).Raw() | | if err != nil { k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26f2811, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7ebe6a8?, 0xc0001a8000?}, 0xc0042637c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:35:52.940: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:35:56.002: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:35:56.007: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:35:56.012: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:35:56.012: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 6m40.036s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 6m40.011s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 6m25.97s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000af0d80, 0xc00071cc00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003140800, 0xc00071cc00, {0xa0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00168f040?}, 0xc00071cc00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc00168f040, 0xc00071cc00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6ee5440?, 0xc00483ff20?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0004146c0, 0xc00071cb00) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00071cb00, {0x7e8b940, 0xc0004146c0}, {0x73cd720?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc002c45a10, 0xc00071cb00, {0x7f963eb82108?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc002c45a10, 0xc00071cb00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00071c900, {0x7ebe6e0, 0xc0032084e0}, 0xc000f60540?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00071c900, {0x7ebe6e0, 0xc0032084e0}) vendor/k8s.io/client-go/rest/request.go:1005 > k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() test/e2e/network/dns_common.go:468 | Name(pod.Name). | Suffix(fileDir, fileName). > Do(ctx).Raw() | | if err != nil { k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26f2811, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7ebe6a8?, 0xc0001a8000?}, 0xc0042637c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:36:27.941: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:36:27.944: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:36:27.949: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:36:27.952: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:36:27.952: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 7m0.038s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 7m0.012s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 6m45.972s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:36:36.002: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:36:36.006: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:36:36.011: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:36:36.016: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:36:36.016: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 7m20.039s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 7m20.013s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 7m5.973s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000af0d80, 0xc00071d600) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003140800, 0xc00071d600, {0xa0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00168f040?}, 0xc00071d600?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc00168f040, 0xc00071d600) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6ee5440?, 0xc00483f890?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0004146c0, 0xc00071d500) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00071d500, {0x7e8b940, 0xc0004146c0}, {0x73cd720?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc002c45a10, 0xc00071d500, {0x7f963eb825b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc002c45a10, 0xc00071d500) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00071d300, {0x7ebe6e0, 0xc003208540}, 0xc000f60540?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00071d300, {0x7ebe6e0, 0xc003208540}) vendor/k8s.io/client-go/rest/request.go:1005 > k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() test/e2e/network/dns_common.go:468 | Name(pod.Name). | Suffix(fileDir, fileName). > Do(ctx).Raw() | | if err != nil { k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26f2811, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7ebe6a8?, 0xc0001a8000?}, 0xc0042637c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:37:07.939: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:07.945: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:07.948: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:07.953: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:07.953: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 7m40.04s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 7m40.015s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 7m25.974s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:37:16.002: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:16.007: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:16.011: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:16.015: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:16.015: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 8m0.042s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 8m0.017s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 7m45.976s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000af0d80, 0xc001487b00) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003140800, 0xc001487b00, {0xa0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00168f040?}, 0xc001487b00?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc00168f040, 0xc001487b00) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6ee5440?, 0xc002cc8870?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0004146c0, 0xc001487a00) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc001487a00, {0x7e8b940, 0xc0004146c0}, {0x73cd720?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc002c45a10, 0xc001487a00, {0x7f963eb825b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc002c45a10, 0xc001487a00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc001487300, {0x7ebe6e0, 0xc003208c00}, 0xc000f60540?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc001487300, {0x7ebe6e0, 0xc003208c00}) vendor/k8s.io/client-go/rest/request.go:1005 > k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() test/e2e/network/dns_common.go:468 | Name(pod.Name). | Suffix(fileDir, fileName). > Do(ctx).Raw() | | if err != nil { k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26f2811, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7ebe6a8?, 0xc0001a8000?}, 0xc0042637c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:37:47.939: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:47.944: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:47.948: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:47.952: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:47.952: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 8m20.044s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 8m20.018s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 8m5.978s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:37:56.002: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:56.006: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:56.010: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:56.014: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:37:56.015: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 8m40.046s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 8m40.02s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 8m25.98s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000af0d80, 0xc0005c4600) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003140800, 0xc0005c4600, {0xa0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00168f040?}, 0xc0005c4600?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc00168f040, 0xc0005c4600) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6ee5440?, 0xc003185470?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0004146c0, 0xc0005c4500) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc0005c4500, {0x7e8b940, 0xc0004146c0}, {0x73cd720?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc002c45a10, 0xc0005c4500, {0x7f963eb82a68?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc002c45a10, 0xc0005c4500) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc0005c4300, {0x7ebe6e0, 0xc002446a20}, 0xc000f60540?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc0005c4300, {0x7ebe6e0, 0xc002446a20}) vendor/k8s.io/client-go/rest/request.go:1005 > k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() test/e2e/network/dns_common.go:468 | Name(pod.Name). | Suffix(fileDir, fileName). > Do(ctx).Raw() | | if err != nil { k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26f2811, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7ebe6a8?, 0xc0001a8000?}, 0xc0042637c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:38:27.939: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:38:27.945: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:38:27.950: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:38:27.954: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:38:27.954: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 9m0.048s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 9m0.022s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 8m45.982s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:38:36.002: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:38:36.007: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:38:36.011: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:38:36.015: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:38:36.015: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 9m20.05s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 9m20.024s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 9m5.984s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000af0d80, 0xc0005c5200) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003140800, 0xc0005c5200, {0xa0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00168f040?}, 0xc0005c5200?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc00168f040, 0xc0005c5200) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6ee5440?, 0xc002cc92f0?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0004146c0, 0xc0005c5100) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc0005c5100, {0x7e8b940, 0xc0004146c0}, {0x73cd720?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc002c45a10, 0xc0005c5100, {0x7f963eb825b8?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc002c45a10, 0xc0005c5100) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc0005c4f00, {0x7ebe6e0, 0xc00338eba0}, 0xc000f60540?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc0005c4f00, {0x7ebe6e0, 0xc00338eba0}) vendor/k8s.io/client-go/rest/request.go:1005 > k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() test/e2e/network/dns_common.go:468 | Name(pod.Name). | Suffix(fileDir, fileName). > Do(ctx).Raw() | | if err != nil { k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26f2811, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7ebe6a8?, 0xc0001a8000?}, 0xc0042637c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:39:07.938: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:39:07.943: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:39:07.948: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:39:07.953: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:39:07.953: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 9m40.051s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 9m40.025s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 9m25.985s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:39:16.002: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:39:16.007: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:39:16.012: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:39:16.016: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:39:16.016: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 10m0.053s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 10m0.028s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 9m45.987s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000af0d80, 0xc0005fc000) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003140800, 0xc0005fc000, {0xa0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00168f040?}, 0xc0005fc000?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc00168f040, 0xc0005fc000) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6ee5440?, 0xc00332a900?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0004146c0, 0xc001487f00) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc001487f00, {0x7e8b940, 0xc0004146c0}, {0x73cd720?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc002c45a10, 0xc001487f00, {0x7f963eb82a68?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc002c45a10, 0xc001487f00) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc001487d00, {0x7ebe6e0, 0xc003208180}, 0xc000f60540?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc001487d00, {0x7ebe6e0, 0xc003208180}) vendor/k8s.io/client-go/rest/request.go:1005 > k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() test/e2e/network/dns_common.go:468 | Name(pod.Name). | Suffix(fileDir, fileName). > Do(ctx).Raw() | | if err != nil { k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26f2811, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7ebe6a8?, 0xc0001a8000?}, 0xc0042637c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:39:47.940: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) ------------------------------ Automatically polling progress: [sig-network] DNS should provide DNS for pods for Hostname [Conformance] (Spec Runtime: 10m20.056s) test/e2e/network/dns.go:248 In [It] (Node Runtime: 10m20.03s) test/e2e/network/dns.go:248 At [By Step] looking for the results for each expected name from probers (Step Runtime: 10m5.99s) test/e2e/network/dns_common.go:511 Spec Goroutine goroutine 2263 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000af0d80, 0xc0005fc400) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc003140800, 0xc0005fc400, {0xa0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00168f040?}, 0xc0005fc400?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc00168f040, 0xc0005fc400) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6ee5440?, 0xc00332aba0?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0004146c0, 0xc0005fc300) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc0005fc300, {0x7e8b940, 0xc0004146c0}, {0x73cd720?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc002c45a10, 0xc0005fc300, {0x7f963eb82a68?, 0x100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc002c45a10, 0xc0005fc300) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc0005fc100, {0x7ebe6e0, 0xc003208180}, 0xc000f60540?) vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc0005fc100, {0x7ebe6e0, 0xc003208180}) vendor/k8s.io/client-go/rest/request.go:1005 > k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() test/e2e/network/dns_common.go:468 | Name(pod.Name). | Suffix(fileDir, fileName). > Do(ctx).Raw() | | if err != nil { k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x26f2811, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7ebe6a8?, 0xc0001a8000?}, 0xc0042637c0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0043c1db8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb0?, 0x2f7d7e5?, 0x60?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x7e6e5f8?, 0xc000f60900?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x1e?, 0x1ff?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600, 0x4, 0x4}, {0x74b5afe, 0x7}, 0xc002569400, {0x7efa648?, 0xc003c58d00}, 0x0, {0x0, ...}) test/e2e/network/dns_common.go:455 | var failed []string | > framework.ExpectNoError(wait.PollImmediate(time.Second*5, time.Second*600, func() (bool, error) { | failed = []string{} | > k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 | | func assertFilesExist(fileNames []string, fileDir string, pod *v1.Pod, client clientset.Interface) { > assertFilesContain(fileNames, fileDir, pod, client, false, "") | } | > k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 | // Try to find results for each expected name. | ginkgo.By("looking for the results for each expected name from probers") > assertFilesExist(fileNames, "results", pod, f.ClientSet) | | // TODO: probe from the host, too. > k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 | pod1.Spec.Subdomain = serviceName | > validateDNSResults(f, pod1, append(wheezyFileNames, jessieFileNames...)) | }) | k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0029e1b90, 0xc0029b52c0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:39:51.014: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:39:54.086: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:39:57.155: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:39:57.155: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] Nov 8 18:39:57.160: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:39:57.165: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:39:57.169: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:39:57.175: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: the server is currently unable to handle the request (get pods dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63) Nov 8 18:39:57.175: INFO: Lookups using dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local wheezy_hosts@dns-querier-2 jessie_hosts@dns-querier-2.dns-test-service-2.dns-8524.svc.cluster.local jessie_hosts@dns-querier-2] Nov 8 18:39:57.175: INFO: Unexpected error: <*errors.errorString | 0xc000285cb0>: { s: "timed out waiting for the condition", } Nov 8 18:39:57.175: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc003e39600?, 0x4?, 0x4?}, {0x74b5afe?, 0x7?}, 0xc002569400?, {0x7efa648?, 0xc003c58d00?}, 0x0?, {0x0, ...}) test/e2e/network/dns_common.go:455 +0x1dc k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) test/e2e/network/dns_common.go:449 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000d08c30, 0xc002569400, {0xc003e39600, 0x4, 0x4}) test/e2e/network/dns_common.go:512 +0x452 k8s.io/kubernetes/test/e2e/network.glob..func2.7() test/e2e/network/dns.go:281 +0x8b4 STEP: deleting the pod 11/08/22 18:39:57.175 STEP: deleting the test headless service 11/08/22 18:39:57.196 [AfterEach] [sig-network] DNS test/e2e/framework/node/init/init.go:32 Nov 8 18:39:57.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-network] DNS test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] DNS dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 18:39:57.219 STEP: Collecting events from namespace "dns-8524". 11/08/22 18:39:57.219 STEP: Found 17 events. 11/08/22 18:39:57.222 Nov 8 18:39:57.223: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: { } Scheduled: Successfully assigned dns-8524/dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63 to 172.17.0.1 Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:31 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:31 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} Created: Created container webserver Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:31 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} Started: Started container webserver Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:31 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:31 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} Created: Created container querier Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:31 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} Started: Started container querier Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:31 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} Pulling: Pulling image "registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5" Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:38 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5" in 6.840057118s (6.840079972s including waiting) Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:38 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} Failed: Error: failed to get sandbox container task: no running task found: task 916a7e448aacf022a4f71a1f08d8ca0cfa76044e7d328bf9e469c86d716a444d not found: not found Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:38 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:41 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5" already present on machine Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:41 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} Created: Created container jessie-querier Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:41 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} Failed: Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: can't get final child's PID from pipe: EOF: unknown Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:44 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container webserver in pod dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63_dns-8524(b077c7f6-02d0-4f61-885d-e2415f682408) Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:44 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container querier in pod dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63_dns-8524(b077c7f6-02d0-4f61-885d-e2415f682408) Nov 8 18:39:57.223: INFO: At 2022-11-08 18:29:44 +0000 UTC - event for dns-test-a1e44f1d-d62a-48f4-8690-1e720abbeb63: {kubelet 172.17.0.1} Started: Started container jessie-querier Nov 8 18:39:57.226: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 18:39:57.226: INFO: Nov 8 18:39:57.230: INFO: Logging node info for node 172.17.0.1 Nov 8 18:39:57.233: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 3940 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:35:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:35:16 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:35:16 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:35:16 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:35:16 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 18:39:57.233: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 18:39:57.237: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 18:39:57.254: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 18:39:57.254: INFO: Container coredns ready: false, restart count 11 Nov 8 18:39:57.297: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-network] DNS tear down framework | framework.go:193 STEP: Destroying namespace "dns-8524" for this suite. 11/08/22 18:39:57.297
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sHostPort\svalidates\sthat\sthere\sis\sno\sconflict\sbetween\spods\swith\ssame\shostPort\sbut\sdifferent\shostIP\sand\sprotocol\s\[LinuxOnly\]\s\[Conformance\]$'
test/e2e/network/hostport.go:161 k8s.io/kubernetes/test/e2e/network.glob..func12.2() test/e2e/network/hostport.go:161 +0x14defrom junit_01.xml
[BeforeEach] [sig-network] HostPort set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 18:57:19.664 Nov 8 18:57:19.664: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename hostport 11/08/22 18:57:19.666 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 18:57:19.699 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 18:57:19.704 [BeforeEach] [sig-network] HostPort test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] HostPort test/e2e/network/hostport.go:49 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] test/e2e/network/hostport.go:63 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled 11/08/22 18:57:19.712 Nov 8 18:57:19.725: INFO: Waiting up to 5m0s for pod "pod1" in namespace "hostport-9018" to be "running and ready" Nov 8 18:57:19.728: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.881538ms Nov 8 18:57:19.728: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:57:21.734: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008798352s Nov 8 18:57:21.734: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:57:23.733: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 4.008178879s Nov 8 18:57:23.733: INFO: The phase of Pod pod1 is Running (Ready = true) Nov 8 18:57:23.733: INFO: Pod "pod1" satisfied condition "running and ready" STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 172.17.0.1 on the node which pod1 resides and expect scheduled 11/08/22 18:57:23.733 Nov 8 18:57:23.742: INFO: Waiting up to 5m0s for pod "pod2" in namespace "hostport-9018" to be "running and ready" Nov 8 18:57:23.746: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.758168ms Nov 8 18:57:23.746: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:57:25.750: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008126648s Nov 8 18:57:25.751: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:57:27.751: INFO: Pod "pod2": Phase="Running", Reason="", readiness=false. Elapsed: 4.009082336s Nov 8 18:57:27.751: INFO: The phase of Pod pod2 is Running (Ready = false) Nov 8 18:57:29.750: INFO: Pod "pod2": Phase="Running", Reason="", readiness=false. Elapsed: 6.008074328s Nov 8 18:57:29.750: INFO: The phase of Pod pod2 is Running (Ready = false) Nov 8 18:57:31.760: INFO: Pod "pod2": Phase="Running", Reason="", readiness=false. Elapsed: 8.017881876s Nov 8 18:57:31.760: INFO: The phase of Pod pod2 is Running (Ready = false) Nov 8 18:57:33.752: INFO: Pod "pod2": Phase="Running", Reason="", readiness=false. Elapsed: 10.009206325s Nov 8 18:57:33.752: INFO: The phase of Pod pod2 is Running (Ready = false) Nov 8 18:57:35.751: INFO: Pod "pod2": Phase="Running", Reason="", readiness=false. Elapsed: 12.009043383s Nov 8 18:57:35.751: INFO: The phase of Pod pod2 is Running (Ready = false) Nov 8 18:57:37.750: INFO: Pod "pod2": Phase="Running", Reason="", readiness=false. Elapsed: 14.008078244s Nov 8 18:57:37.750: INFO: The phase of Pod pod2 is Running (Ready = false) Nov 8 18:57:39.750: INFO: Pod "pod2": Phase="Running", Reason="", readiness=false. Elapsed: 16.007656877s Nov 8 18:57:39.750: INFO: The phase of Pod pod2 is Running (Ready = false) Nov 8 18:57:41.752: INFO: Pod "pod2": Phase="Running", Reason="", readiness=false. Elapsed: 18.009373917s Nov 8 18:57:41.752: INFO: The phase of Pod pod2 is Running (Ready = false) Nov 8 18:57:43.750: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 20.00785142s Nov 8 18:57:43.750: INFO: The phase of Pod pod2 is Running (Ready = true) Nov 8 18:57:43.750: INFO: Pod "pod2" satisfied condition "running and ready" STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 172.17.0.1 but use UDP protocol on the node which pod2 resides 11/08/22 18:57:43.75 Nov 8 18:57:43.758: INFO: Waiting up to 5m0s for pod "pod3" in namespace "hostport-9018" to be "running and ready" Nov 8 18:57:43.761: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.831708ms Nov 8 18:57:43.761: INFO: The phase of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:57:45.765: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007087831s Nov 8 18:57:45.765: INFO: The phase of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:57:47.765: INFO: Pod "pod3": Phase="Running", Reason="", readiness=true. Elapsed: 4.007265665s Nov 8 18:57:47.765: INFO: The phase of Pod pod3 is Running (Ready = true) Nov 8 18:57:47.765: INFO: Pod "pod3" satisfied condition "running and ready" Nov 8 18:57:47.772: INFO: Waiting up to 5m0s for pod "e2e-host-exec" in namespace "hostport-9018" to be "running and ready" Nov 8 18:57:47.775: INFO: Pod "e2e-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 3.56526ms Nov 8 18:57:47.775: INFO: The phase of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:57:49.781: INFO: Pod "e2e-host-exec": Phase="Running", Reason="", readiness=true. Elapsed: 2.009676225s Nov 8 18:57:49.781: INFO: The phase of Pod e2e-host-exec is Running (Ready = true) Nov 8 18:57:49.781: INFO: Pod "e2e-host-exec" satisfied condition "running and ready" STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 11/08/22 18:57:49.784 Nov 8 18:57:49.785: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.17.0.1 http://127.0.0.1:54323/hostname] Namespace:hostport-9018 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 8 18:57:49.785: INFO: >>> kubeConfig: /workspace/.kube/config Nov 8 18:57:49.786: INFO: ExecWithOptions: Clientset creation Nov 8 18:57:49.786: INFO: ExecWithOptions: execute(POST https://localhost:6443/api/v1/namespaces/hostport-9018/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+172.17.0.1+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) Nov 8 18:57:49.813: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54323 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 11/08/22 18:57:49.813 Nov 8 18:57:49.813: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.17.0.1 http://127.0.0.1:54323/hostname] Namespace:hostport-9018 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 8 18:57:49.813: INFO: >>> kubeConfig: /workspace/.kube/config Nov 8 18:57:49.814: INFO: ExecWithOptions: Clientset creation Nov 8 18:57:49.814: INFO: ExecWithOptions: execute(POST https://localhost:6443/api/v1/namespaces/hostport-9018/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+172.17.0.1+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) Nov 8 18:57:49.835: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54323 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 11/08/22 18:57:49.835 Nov 8 18:57:49.835: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.17.0.1 http://127.0.0.1:54323/hostname] Namespace:hostport-9018 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 8 18:57:49.835: INFO: >>> kubeConfig: /workspace/.kube/config Nov 8 18:57:49.836: INFO: ExecWithOptions: Clientset creation Nov 8 18:57:49.836: INFO: ExecWithOptions: execute(POST https://localhost:6443/api/v1/namespaces/hostport-9018/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+172.17.0.1+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) Nov 8 18:57:49.861: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54323 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 11/08/22 18:57:49.861 Nov 8 18:57:49.861: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.17.0.1 http://127.0.0.1:54323/hostname] Namespace:hostport-9018 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 8 18:57:49.861: INFO: >>> kubeConfig: /workspace/.kube/config Nov 8 18:57:49.861: INFO: ExecWithOptions: Clientset creation Nov 8 18:57:49.861: INFO: ExecWithOptions: execute(POST https://localhost:6443/api/v1/namespaces/hostport-9018/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+172.17.0.1+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) Nov 8 18:57:49.883: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54323 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 11/08/22 18:57:49.884 Nov 8 18:57:49.884: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.17.0.1 http://127.0.0.1:54323/hostname] Namespace:hostport-9018 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 8 18:57:49.884: INFO: >>> kubeConfig: /workspace/.kube/config Nov 8 18:57:49.884: INFO: ExecWithOptions: Clientset creation Nov 8 18:57:49.884: INFO: ExecWithOptions: execute(POST https://localhost:6443/api/v1/namespaces/hostport-9018/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+172.17.0.1+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) Nov 8 18:57:49.906: INFO: Can not connect from e2e-host-exec to pod(pod1) to serverIP: 127.0.0.1, port: 54323 Nov 8 18:57:49.906: FAIL: Failed to connect to exposed host ports Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func12.2() test/e2e/network/hostport.go:161 +0x14de [AfterEach] [sig-network] HostPort test/e2e/framework/node/init/init.go:32 Nov 8 18:57:49.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-network] HostPort test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] HostPort dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 18:57:49.91 STEP: Collecting events from namespace "hostport-9018". 11/08/22 18:57:49.91 STEP: Found 20 events. 11/08/22 18:57:49.915 Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:22 +0000 UTC - event for pod1: {kubelet 172.17.0.1} Created: Created container agnhost Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:22 +0000 UTC - event for pod1: {kubelet 172.17.0.1} Started: Started container agnhost Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:22 +0000 UTC - event for pod1: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:24 +0000 UTC - event for pod1: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:26 +0000 UTC - event for pod2: {kubelet 172.17.0.1} Started: Started container agnhost Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:26 +0000 UTC - event for pod2: {kubelet 172.17.0.1} Unhealthy: Readiness probe failed: Get "http://10.88.5.149:8080/hostname": dial tcp 10.88.5.149:8080: connect: connection refused Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:26 +0000 UTC - event for pod2: {kubelet 172.17.0.1} Created: Created container agnhost Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:26 +0000 UTC - event for pod2: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:28 +0000 UTC - event for pod2: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:31 +0000 UTC - event for pod1: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container agnhost in pod pod1_hostport-9018(c4e4c9a6-940c-4709-8561-3d668c0b51bc) Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:31 +0000 UTC - event for pod2: {kubelet 172.17.0.1} Killing: Stopping container agnhost Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:32 +0000 UTC - event for pod2: {kubelet 172.17.0.1} Unhealthy: Readiness probe failed: Get "http://10.88.5.149:8080/hostname": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:34 +0000 UTC - event for pod2: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container agnhost in pod pod2_hostport-9018(10d91dd5-f7f9-472f-9814-414273f4a47a) Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:46 +0000 UTC - event for pod3: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:46 +0000 UTC - event for pod3: {kubelet 172.17.0.1} Created: Created container agnhost Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:46 +0000 UTC - event for pod3: {kubelet 172.17.0.1} Started: Started container agnhost Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:47 +0000 UTC - event for pod3: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:48 +0000 UTC - event for e2e-host-exec: {kubelet 172.17.0.1} Created: Created container e2e-host-exec Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:48 +0000 UTC - event for e2e-host-exec: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 18:57:49.915: INFO: At 2022-11-08 18:57:48 +0000 UTC - event for e2e-host-exec: {kubelet 172.17.0.1} Started: Started container e2e-host-exec Nov 8 18:57:49.919: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 18:57:49.920: INFO: e2e-host-exec 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:47 +0000 UTC }] Nov 8 18:57:49.920: INFO: pod1 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:39 +0000 UTC ContainersNotReady containers with unready status: [agnhost]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:39 +0000 UTC ContainersNotReady containers with unready status: [agnhost]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:19 +0000 UTC }] Nov 8 18:57:49.920: INFO: pod2 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:43 +0000 UTC ContainersNotReady containers with unready status: [agnhost]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:43 +0000 UTC ContainersNotReady containers with unready status: [agnhost]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:23 +0000 UTC }] Nov 8 18:57:49.920: INFO: pod3 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:47 +0000 UTC ContainersNotReady containers with unready status: [agnhost]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:47 +0000 UTC ContainersNotReady containers with unready status: [agnhost]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:57:43 +0000 UTC }] Nov 8 18:57:49.920: INFO: Nov 8 18:57:49.950: INFO: Logging node info for node 172.17.0.1 Nov 8 18:57:49.954: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 5979 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:54:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:54:49 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 18:57:49.955: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 18:57:49.964: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 18:57:49.973: INFO: pod1 started at 2022-11-08 18:57:19 +0000 UTC (0+1 container statuses recorded) Nov 8 18:57:49.973: INFO: Container agnhost ready: false, restart count 2 Nov 8 18:57:49.973: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 18:57:49.973: INFO: Container coredns ready: false, restart count 14 Nov 8 18:57:49.973: INFO: pod2 started at 2022-11-08 18:57:23 +0000 UTC (0+1 container statuses recorded) Nov 8 18:57:49.973: INFO: Container agnhost ready: false, restart count 2 Nov 8 18:57:49.973: INFO: pod3 started at 2022-11-08 18:57:43 +0000 UTC (0+1 container statuses recorded) Nov 8 18:57:49.973: INFO: Container agnhost ready: false, restart count 0 Nov 8 18:57:49.973: INFO: e2e-host-exec started at 2022-11-08 18:57:47 +0000 UTC (0+1 container statuses recorded) Nov 8 18:57:49.973: INFO: Container e2e-host-exec ready: true, restart count 0 Nov 8 18:57:50.014: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-network] HostPort tear down framework | framework.go:193 STEP: Destroying namespace "hostport-9018" for this suite. 11/08/22 18:57:50.014
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\sintra\-pod\scommunication\:\sudp\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/network/utils.go:866 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc00024e380, {0x74bc9cc, 0x9}, 0xc001948570) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc00024e380, 0x46?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.NewCoreNetworkingTestConfig(0xc0003f03c0, 0x0) test/e2e/framework/network/utils.go:144 +0xfb k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.3() test/e2e/common/network/networking.go:94 +0x28from junit_01.xml
[BeforeEach] [sig-network] Networking set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 18:08:15.871 Nov 8 18:08:15.871: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename pod-network-test 11/08/22 18:08:15.873 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 18:08:15.895 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 18:08:15.9 [BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31 [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] test/e2e/common/network/networking.go:93 STEP: Performing setup for networking test in namespace pod-network-test-445 11/08/22 18:08:15.906 STEP: creating a selector 11/08/22 18:08:15.906 STEP: Creating the service pods in kubernetes 11/08/22 18:08:15.906 Nov 8 18:08:15.906: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 8 18:08:15.926: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-445" to be "running and ready" Nov 8 18:08:15.931: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.75543ms Nov 8 18:08:15.931: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:08:17.936: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009714712s Nov 8 18:08:17.936: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:08:19.938: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011189614s Nov 8 18:08:19.938: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:08:21.936: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009164092s Nov 8 18:08:21.936: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:08:23.937: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.010339493s Nov 8 18:08:23.937: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:08:25.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.010277612s Nov 8 18:08:25.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:27.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.009506413s Nov 8 18:08:27.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:29.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.010916667s Nov 8 18:08:29.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:31.938: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.011521825s Nov 8 18:08:31.938: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:33.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.010918001s Nov 8 18:08:33.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:35.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.009828267s Nov 8 18:08:35.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:37.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.010658267s Nov 8 18:08:37.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:39.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.010297239s Nov 8 18:08:39.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:41.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.00976948s Nov 8 18:08:41.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:43.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.00897507s Nov 8 18:08:43.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:45.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.009316397s Nov 8 18:08:45.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:47.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.010653248s Nov 8 18:08:47.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:49.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.010463078s Nov 8 18:08:49.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:51.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.008729506s Nov 8 18:08:51.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:53.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.008755228s Nov 8 18:08:53.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:55.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.009046135s Nov 8 18:08:55.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:57.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.009191435s Nov 8 18:08:57.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:08:59.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.010805145s Nov 8 18:08:59.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:01.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.010269637s Nov 8 18:09:01.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:03.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.009116544s Nov 8 18:09:03.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:05.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.009134413s Nov 8 18:09:05.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:07.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.009213533s Nov 8 18:09:07.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:09.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.010700769s Nov 8 18:09:09.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:11.938: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.011103085s Nov 8 18:09:11.938: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:13.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.009412036s Nov 8 18:09:13.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:15.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.010585087s Nov 8 18:09:15.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:17.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.008669064s Nov 8 18:09:17.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:19.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.010049477s Nov 8 18:09:19.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:21.938: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.011487352s Nov 8 18:09:21.938: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:23.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.010488188s Nov 8 18:09:23.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:25.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.009257365s Nov 8 18:09:25.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:27.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.0102003s Nov 8 18:09:27.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:29.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.008670679s Nov 8 18:09:29.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:31.938: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.011590483s Nov 8 18:09:31.938: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:33.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.010973014s Nov 8 18:09:33.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:35.939: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.012397837s Nov 8 18:09:35.939: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:37.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.01102006s Nov 8 18:09:37.938: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:39.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.009923539s Nov 8 18:09:39.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:41.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.010763195s Nov 8 18:09:41.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:43.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.010356675s Nov 8 18:09:43.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:45.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.010287383s Nov 8 18:09:45.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:47.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.010079184s Nov 8 18:09:47.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:49.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.009919949s Nov 8 18:09:49.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:51.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.009373555s Nov 8 18:09:51.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:53.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.010593724s Nov 8 18:09:53.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:55.938: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.011230742s Nov 8 18:09:55.938: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:57.938: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.011687222s Nov 8 18:09:57.938: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:09:59.938: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.011658273s Nov 8 18:09:59.938: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:01.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.010279898s Nov 8 18:10:01.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:03.938: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.012059169s Nov 8 18:10:03.939: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:05.938: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.011761228s Nov 8 18:10:05.938: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:07.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.009687798s Nov 8 18:10:07.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:09.940: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.013200209s Nov 8 18:10:09.940: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:11.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.009213541s Nov 8 18:10:11.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:13.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.009742033s Nov 8 18:10:13.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:15.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.010294858s Nov 8 18:10:15.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:17.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.009774881s Nov 8 18:10:17.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:19.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.010573973s Nov 8 18:10:19.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:21.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.009096255s Nov 8 18:10:21.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:23.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.00915681s Nov 8 18:10:23.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:25.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.008332089s Nov 8 18:10:25.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:27.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m12.009663081s Nov 8 18:10:27.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:29.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.009147051s Nov 8 18:10:29.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:31.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.008534779s Nov 8 18:10:31.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:33.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.01043079s Nov 8 18:10:33.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:35.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.009735134s Nov 8 18:10:35.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:37.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.010178399s Nov 8 18:10:37.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:39.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.008997368s Nov 8 18:10:39.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:41.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.008607868s Nov 8 18:10:41.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:43.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.009885485s Nov 8 18:10:43.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:45.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m30.008693783s Nov 8 18:10:45.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:47.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m32.00975078s Nov 8 18:10:47.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:49.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m34.008991556s Nov 8 18:10:49.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:51.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m36.009305676s Nov 8 18:10:51.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:53.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m38.009069666s Nov 8 18:10:53.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:55.938: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m40.011138518s Nov 8 18:10:55.938: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:57.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m42.009595917s Nov 8 18:10:57.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:10:59.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m44.009265722s Nov 8 18:10:59.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:01.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m46.009982927s Nov 8 18:11:01.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:03.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m48.009644486s Nov 8 18:11:03.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:05.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m50.010096255s Nov 8 18:11:05.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:07.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m52.010806884s Nov 8 18:11:07.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:09.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m54.009684299s Nov 8 18:11:09.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:11.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m56.010172831s Nov 8 18:11:11.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:13.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m58.009233793s Nov 8 18:11:13.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:15.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m0.008933335s Nov 8 18:11:15.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:17.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m2.009541002s Nov 8 18:11:17.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:19.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m4.009132093s Nov 8 18:11:19.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:21.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m6.008936353s Nov 8 18:11:21.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:23.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m8.009224904s Nov 8 18:11:23.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:25.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m10.008546004s Nov 8 18:11:25.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:27.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m12.010391758s Nov 8 18:11:27.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:29.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m14.00958901s Nov 8 18:11:29.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:31.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m16.010378858s Nov 8 18:11:31.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:33.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m18.008810948s Nov 8 18:11:33.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:35.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m20.010407973s Nov 8 18:11:35.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:37.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m22.009975611s Nov 8 18:11:37.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:39.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m24.009489793s Nov 8 18:11:39.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:41.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m26.008542646s Nov 8 18:11:41.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:43.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m28.008744045s Nov 8 18:11:43.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:45.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m30.009992687s Nov 8 18:11:45.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:47.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m32.009102781s Nov 8 18:11:47.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:49.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m34.00990759s Nov 8 18:11:49.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:51.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m36.00871595s Nov 8 18:11:51.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:53.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m38.009482172s Nov 8 18:11:53.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:55.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m40.009399546s Nov 8 18:11:55.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:57.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m42.009918435s Nov 8 18:11:57.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:11:59.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m44.009402545s Nov 8 18:11:59.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:01.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m46.009069823s Nov 8 18:12:01.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:03.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m48.008826235s Nov 8 18:12:03.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:05.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m50.009793878s Nov 8 18:12:05.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:07.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m52.010038463s Nov 8 18:12:07.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:09.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m54.010857401s Nov 8 18:12:09.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:11.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m56.008443688s Nov 8 18:12:11.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:13.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m58.010127323s Nov 8 18:12:13.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:15.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m0.008687975s Nov 8 18:12:15.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:17.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m2.009175112s Nov 8 18:12:17.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:19.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m4.009437452s Nov 8 18:12:19.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:21.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m6.008783648s Nov 8 18:12:21.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:23.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m8.009432006s Nov 8 18:12:23.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:25.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m10.009682274s Nov 8 18:12:25.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:27.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m12.009827005s Nov 8 18:12:27.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:29.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m14.008941512s Nov 8 18:12:29.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:31.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m16.01065167s Nov 8 18:12:31.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:33.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m18.009235055s Nov 8 18:12:33.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:35.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m20.009782006s Nov 8 18:12:35.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:37.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m22.008946972s Nov 8 18:12:37.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:39.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m24.008958952s Nov 8 18:12:39.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:41.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m26.009185872s Nov 8 18:12:41.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:43.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m28.009640549s Nov 8 18:12:43.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:45.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m30.009313498s Nov 8 18:12:45.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:47.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m32.009438883s Nov 8 18:12:47.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:49.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m34.010142767s Nov 8 18:12:49.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:51.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m36.008725278s Nov 8 18:12:51.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:53.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m38.009168792s Nov 8 18:12:53.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:55.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m40.009001242s Nov 8 18:12:55.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:57.935: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m42.00844707s Nov 8 18:12:57.935: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:12:59.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m44.009433106s Nov 8 18:12:59.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:13:01.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m46.00937228s Nov 8 18:13:01.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:13:03.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m48.009754521s Nov 8 18:13:03.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:13:05.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m50.010171916s Nov 8 18:13:05.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:13:07.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m52.009831667s Nov 8 18:13:07.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:13:09.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m54.009390596s Nov 8 18:13:09.936: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:13:11.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m56.010353614s Nov 8 18:13:11.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:13:13.937: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4m58.01107335s Nov 8 18:13:13.938: INFO: The phase of Pod netserver-0 is Running (Ready = false) ------------------------------ Automatically polling progress: [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] (Spec Runtime: 5m0.035s) test/e2e/common/network/networking.go:93 In [It] (Node Runtime: 5m0.001s) test/e2e/common/network/networking.go:93 At [By Step] Creating the service pods in kubernetes (Step Runtime: 5m0s) test/e2e/framework/network/utils.go:761 Spec Goroutine goroutine 239 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7ebe6a8, 0xc0001a8000}, 0xc0007135d8, 0x2f7ec4a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7ebe6a8, 0xc0001a8000}, 0xb8?, 0x2f7d7e5?, 0x70?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7ebe6a8, 0xc0001a8000}, 0x74aadba?, 0xc0019ef908?, 0x25da967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x74acb16?, 0x4?, 0x75e619a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodCondition({0x7efa648?, 0xc0012eeb60}, {0xc00379cae0, 0x14}, {0xc00389e643, 0xb}, {0x74e3a16, 0x11}, 0xc0038a8801?, 0x7781d70) test/e2e/framework/pod/wait.go:289 k8s.io/kubernetes/test/e2e/framework/pod.WaitTimeoutForPodReadyInNamespace({0x7efa648?, 0xc0012eeb60?}, {0xc00389e643?, 0xc00389e380?}, {0xc00379cae0?, 0x0?}, 0x0?) test/e2e/framework/pod/wait.go:500 > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc00024e380, {0x74bc9cc, 0x9}, 0xc001948570) test/e2e/framework/network/utils.go:866 | runningPods := make([]*v1.Pod, 0, len(nodes)) | for _, p := range createdPods { > framework.ExpectNoError(e2epod.WaitTimeoutForPodReadyInNamespace(config.f.ClientSet, p.Name, config.f.Namespace.Name, framework.PodStartTimeout)) | rp, err := config.getPodClient().Get(context.TODO(), p.Name, metav1.GetOptions{}) | framework.ExpectNoError(err) > k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc00024e380, 0x46?) test/e2e/framework/network/utils.go:763 | ginkgo.By("Creating the service pods in kubernetes") | podName := "netserver" > config.EndpointPods = config.createNetProxyPods(podName, selector) | | ginkgo.By("Creating test pods") > k8s.io/kubernetes/test/e2e/framework/network.NewCoreNetworkingTestConfig(0xc0003f03c0, 0x0) test/e2e/framework/network/utils.go:144 | } | ginkgo.By(fmt.Sprintf("Performing setup for networking test in namespace %v", config.Namespace)) > config.setupCore(getServiceSelector()) | return config | } > k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.3() test/e2e/common/network/networking.go:94 | */ | framework.ConformanceIt("should function for intra-pod communication: udp [NodeConformance]", func() { > config := e2enetwork.NewCoreNetworkingTestConfig(f, false) | checkPodToPodConnectivity(config, "udp", e2enetwork.EndpointUDPPort) | }) k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d04bbe, 0xc000b65080}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 8 18:13:15.936: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.010077697s Nov 8 18:13:15.937: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:13:15.941: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 5m0.014469386s Nov 8 18:13:15.941: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 8 18:13:15.943: INFO: Unexpected error: <*pod.timeoutError | 0xc001637410>: { msg: "timed out while waiting for pod pod-network-test-445/netserver-0 to be running and ready", observedObjects: [ <*v1.Pod | 0xc000c52400>{ TypeMeta: {Kind: "", APIVersion: ""}, ObjectMeta: { Name: "netserver-0", GenerateName: "", Namespace: "pod-network-test-445", SelfLink: "", UID: "2a8f6150-86a7-412c-b621-269bca4d38ed", ResourceVersion: "747", Generation: 0, CreationTimestamp: { Time: { wall: 0, ext: 63803527695, loc: { name: "Local", zone: [ {name: "UTC", offset: 0, isDST: false}, ], tx: [ { when: -576460752303423488, index: 0, isstd: false, isutc: false, }, ], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: "UTC", offset: 0, isDST: false}, }, }, }, DeletionTimestamp: nil, DeletionGracePeriodSeconds: nil, Labels: { "selector-7034dffe-3ae3-48b3-85d9-6f8fbd06b18d": "true", }, Annotations: nil, OwnerReferences: nil, Finalizers: nil, ManagedFields: [ { Manager: "e2e.test", Operation: "Update", APIVersion: "v1", Time: { Time: { wall: 0, ext: 63803527695, loc: { name: "Local", zone: [...], tx: [...], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: ..., offset: ..., isDST: ...}, }, }, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:metadata\":{\"f:labels\":{\".\":{},\"f:selector-7034dffe-3ae3-48b3-85d9-6f8fbd06b18d\":{}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\\"webserver\\\"}\":{\".\":{},\"f:args\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:livenessProbe\":{\".\":{},\"f:failureThreshold\":{},\"f:httpGet\":{\".\":{},\"f:path\":{},\"f:port\":{},\"f:scheme\":{}},\"f:initialDelaySeconds\":{},\"f:periodSeconds\":{},\"f:successThreshold\":{},\"f:timeoutSeconds\":{}},\"f:name\":{},\"f:ports\":{\".\":{},\"k:{\\\"containerPort\\\":8081,\\\"protocol\\\":\\\"UDP\\\"}\":{\".\":{},\"f:containerPort\":{},\"f:name\":{},\"f:protocol\":{}},\"k:{\\\"containerPort\\\":8083,\\\"protocol\\\":\\\"TCP\\\"}\":{\".\":{},\"f:containerPort\":{},\"f:name\":{},\"f:protocol\":{}}},\"f:readinessProbe\":{\".\":{},\"f:failureThreshold\":{},\"f:httpGet\":{\".\":{},\"f:path\":{},\"f:port\":{},\"f:scheme\":{}},\"f:initialDelaySeconds\":{},\"f:periodSeconds\":{},\"f:successThreshold\":{},\"f:timeoutSeconds\":{}},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:nodeSelector\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}", }, Subresource: "", }, { Manager: "kubelet", Operation: "Update", APIVersion: "v1", Time: { Time: { wall: 0, ext: 63803527989, loc: { name: "Local", zone: [...], tx: [...], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: ..., offset: ..., isDST: ...}, }, }, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:status\":{\"f:conditions\":{\"k:{\\\"type\\\":\\\"ContainersReady\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Initialized\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Ready\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}}},\"f:containerStatuses\":{},\"f:hostIP\":{},\"f:phase\":{},\"f:podIP\":{},\"f:podIPs\":{\".\":{},\"k:{\\\"ip\\\":\\\"10.88.0.153\\\"}\":{\".\":{},\"f:ip\":{}},\"k:{\\\"ip\\\":\\\"2001:4860:4860::99\\\"}\":{\".\":{},\"f:ip\":{}}},\"f:startTime\":{}}}", }, Subresource: "status", }, ], }, Spec: { Volumes: [ { Name: "kube-api-access-xljsg", VolumeSource: { HostPath: nil, EmptyDir: nil, GCEPersistentDisk: nil, AWSElasticBlockStore: nil, GitRepo: nil, Secret: nil, NFS: nil, ISCSI: nil, Glusterfs: nil, PersistentVolumeClaim: nil, RBD: nil, FlexVolume: nil, Cinder: nil, CephFS: nil, Flocker: nil, DownwardAPI: nil, FC: nil, AzureFile: nil, ConfigMap: nil, VsphereVolume: nil, Quobyte: nil, AzureDisk: nil, PhotonPersistentDisk: nil, Projected: { Sources: [ { Secret: ..., DownwardAPI: .... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output Nov 8 18:13:15.943: FAIL: timed out while waiting for pod pod-network-test-445/netserver-0 to be running and ready Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc00024e380, {0x74bc9cc, 0x9}, 0xc001948570) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc00024e380, 0x46?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.NewCoreNetworkingTestConfig(0xc0003f03c0, 0x0) test/e2e/framework/network/utils.go:144 +0xfb k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.3() test/e2e/common/network/networking.go:94 +0x28 [AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 Nov 8 18:13:15.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 18:13:15.948 STEP: Collecting events from namespace "pod-network-test-445". 11/08/22 18:13:15.948 STEP: Found 9 events. 11/08/22 18:13:15.953 Nov 8 18:13:15.954: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-0: { } Scheduled: Successfully assigned pod-network-test-445/netserver-0 to 172.17.0.1 Nov 8 18:13:15.954: INFO: At 2022-11-08 18:08:17 +0000 UTC - event for netserver-0: {kubelet 172.17.0.1} Pulling: Pulling image "registry.k8s.io/e2e-test-images/agnhost:2.40" Nov 8 18:13:15.954: INFO: At 2022-11-08 18:08:21 +0000 UTC - event for netserver-0: {kubelet 172.17.0.1} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/agnhost:2.40" in 3.178635427s (3.178669723s including waiting) Nov 8 18:13:15.954: INFO: At 2022-11-08 18:08:21 +0000 UTC - event for netserver-0: {kubelet 172.17.0.1} Failed: Error: failed to get sandbox container task: no running task found: task 9aa14189bf542def6b8a5745c95effef8e3edf8423ba54d00023c1e6528c61cb not found: not found Nov 8 18:13:15.954: INFO: At 2022-11-08 18:08:21 +0000 UTC - event for netserver-0: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 18:13:15.954: INFO: At 2022-11-08 18:08:23 +0000 UTC - event for netserver-0: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 18:13:15.954: INFO: At 2022-11-08 18:08:23 +0000 UTC - event for netserver-0: {kubelet 172.17.0.1} Created: Created container webserver Nov 8 18:13:15.954: INFO: At 2022-11-08 18:08:24 +0000 UTC - event for netserver-0: {kubelet 172.17.0.1} Started: Started container webserver Nov 8 18:13:15.954: INFO: At 2022-11-08 18:08:32 +0000 UTC - event for netserver-0: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container webserver in pod netserver-0_pod-network-test-445(2a8f6150-86a7-412c-b621-269bca4d38ed) Nov 8 18:13:15.961: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 18:13:15.961: INFO: netserver-0 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:08:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:08:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:08:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:08:15 +0000 UTC }] Nov 8 18:13:15.961: INFO: Nov 8 18:13:15.987: INFO: Logging node info for node 172.17.0.1 Nov 8 18:13:15.991: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 378 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:08:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:08:35 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:08:35 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:08:35 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:08:35 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 18:13:15.992: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 18:13:15.996: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 18:13:16.005: INFO: netserver-0 started at 2022-11-08 18:08:15 +0000 UTC (0+1 container statuses recorded) Nov 8 18:13:16.005: INFO: Container webserver ready: false, restart count 5 Nov 8 18:13:16.005: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 18:13:16.005: INFO: Container coredns ready: false, restart count 5 Nov 8 18:13:16.052: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193 STEP: Destroying namespace "pod-network-test-445" for this suite. 11/08/22 18:13:16.052
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sProxy\sversion\sv1\sshould\sproxy\sthrough\sa\sservice\sand\sa\spod\s\s\[Conformance\]$'
test/e2e/network/proxy.go:180 k8s.io/kubernetes/test/e2e/network.glob..func25.1.3() test/e2e/network/proxy.go:180 +0xab0from junit_01.xml
[BeforeEach] version v1 set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 19:07:38.464 Nov 8 19:07:38.464: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename proxy 11/08/22 19:07:38.466 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 19:07:38.491 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 19:07:38.499 [BeforeEach] version v1 test/e2e/framework/metrics/init/init.go:31 [It] should proxy through a service and a pod [Conformance] test/e2e/network/proxy.go:101 STEP: starting an echo server on multiple ports 11/08/22 19:07:38.521 STEP: creating replication controller proxy-service-52rtt in namespace proxy-5241 11/08/22 19:07:38.522 I1108 19:07:38.537574 148400 runners.go:193] Created replication controller with name: proxy-service-52rtt, namespace: proxy-5241, replica count: 1 I1108 19:07:39.589263 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1108 19:07:40.590176 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1108 19:07:41.591295 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1108 19:07:42.591580 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:07:43.592751 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:07:44.593186 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:07:45.593387 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:07:46.593637 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:07:47.594676 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:07:48.595738 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:07:49.596776 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:07:50.597723 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:07:51.598003 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:07:52.598342 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:07:53.599325 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:07:54.600286 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:07:55.601238 148400 runners.go:193] proxy-service-52rtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:07:55.601284 148400 runners.go:193] Logging node info for node 172.17.0.1 I1108 19:07:55.605467 148400 runners.go:193] Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 8011 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 19:05:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} I1108 19:07:55.605817 148400 runners.go:193] Logging kubelet events for node 172.17.0.1 I1108 19:07:55.610021 148400 runners.go:193] Logging pods the kubelet thinks is on node 172.17.0.1 I1108 19:07:55.618827 148400 runners.go:193] coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) I1108 19:07:55.618890 148400 runners.go:193] Container coredns ready: false, restart count 16 I1108 19:07:55.618896 148400 runners.go:193] proxy-service-52rtt-xxnlt started at 2022-11-08 19:07:38 +0000 UTC (0+1 container statuses recorded) I1108 19:07:55.618902 148400 runners.go:193] Container proxy-service-52rtt ready: false, restart count 2 I1108 19:07:55.666162 148400 runners.go:193] Latency metrics for node 172.17.0.1 I1108 19:07:55.671538 148400 runners.go:193] Running kubectl logs on non-ready containers in proxy-5241 Nov 8 19:07:55.681: INFO: Logs of proxy-5241/proxy-service-52rtt-xxnlt:proxy-service-52rtt on node 172.17.0.1 Nov 8 19:07:55.681: INFO: : STARTLOG ENDLOG for container proxy-5241:proxy-service-52rtt-xxnlt:proxy-service-52rtt Nov 8 19:07:55.682: INFO: Unexpected error: <*errors.errorString | 0xc00383a680>: { s: "2 containers failed which is more than allowed 1", } Nov 8 19:07:55.682: FAIL: 2 containers failed which is more than allowed 1 Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func25.1.3() test/e2e/network/proxy.go:180 +0xab0 [AfterEach] version v1 test/e2e/framework/node/init/init.go:32 Nov 8 19:07:55.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] version v1 test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] version v1 dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 19:07:55.686 STEP: Collecting events from namespace "proxy-5241". 11/08/22 19:07:55.686 STEP: Found 8 events. 11/08/22 19:07:55.691 Nov 8 19:07:55.691: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for proxy-service-52rtt-xxnlt: { } Scheduled: Successfully assigned proxy-5241/proxy-service-52rtt-xxnlt to 172.17.0.1 Nov 8 19:07:55.691: INFO: At 2022-11-08 19:07:38 +0000 UTC - event for proxy-service-52rtt: {replication-controller } SuccessfulCreate: Created pod: proxy-service-52rtt-xxnlt Nov 8 19:07:55.691: INFO: At 2022-11-08 19:07:40 +0000 UTC - event for proxy-service-52rtt-xxnlt: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 19:07:55.691: INFO: At 2022-11-08 19:07:40 +0000 UTC - event for proxy-service-52rtt-xxnlt: {kubelet 172.17.0.1} Created: Created container proxy-service-52rtt Nov 8 19:07:55.691: INFO: At 2022-11-08 19:07:40 +0000 UTC - event for proxy-service-52rtt-xxnlt: {kubelet 172.17.0.1} Started: Started container proxy-service-52rtt Nov 8 19:07:55.691: INFO: At 2022-11-08 19:07:41 +0000 UTC - event for proxy-service-52rtt-xxnlt: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 19:07:55.691: INFO: At 2022-11-08 19:07:43 +0000 UTC - event for proxy-service-52rtt-xxnlt: {kubelet 172.17.0.1} Failed: Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to create new parent process: namespace path: lstat /proc/832732/ns/ipc: no such file or directory: unknown Nov 8 19:07:55.691: INFO: At 2022-11-08 19:07:47 +0000 UTC - event for proxy-service-52rtt-xxnlt: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container proxy-service-52rtt in pod proxy-service-52rtt-xxnlt_proxy-5241(3517908d-88c6-4afe-a83d-aa02b79f4ba7) Nov 8 19:07:55.698: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 19:07:55.698: INFO: proxy-service-52rtt-xxnlt 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:07:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:07:38 +0000 UTC ContainersNotReady containers with unready status: [proxy-service-52rtt]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:07:38 +0000 UTC ContainersNotReady containers with unready status: [proxy-service-52rtt]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:07:38 +0000 UTC }] Nov 8 19:07:55.698: INFO: Nov 8 19:07:55.698: INFO: proxy-service-52rtt-xxnlt[proxy-5241].container[proxy-service-52rtt]=failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to create new parent process: namespace path: lstat /proc/832732/ns/ipc: no such file or directory: unknown Nov 8 19:07:55.708: INFO: Logging node info for node 172.17.0.1 Nov 8 19:07:55.712: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 8011 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 19:05:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 19:07:55.712: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 19:07:55.716: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 19:07:55.722: INFO: proxy-service-52rtt-xxnlt started at 2022-11-08 19:07:38 +0000 UTC (0+1 container statuses recorded) Nov 8 19:07:55.722: INFO: Container proxy-service-52rtt ready: false, restart count 2 Nov 8 19:07:55.722: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 19:07:55.722: INFO: Container coredns ready: false, restart count 16 Nov 8 19:07:55.758: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] version v1 tear down framework | framework.go:193 STEP: Destroying namespace "proxy-5241" for this suite. 11/08/22 19:07:55.758
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sServices\sshould\sbe\sable\sto\schange\sthe\stype\sfrom\sClusterIP\sto\sExternalName\s\[Conformance\]$'
test/e2e/network/service.go:4026 k8s.io/kubernetes/test/e2e/network.createAndGetExternalServiceFQDN({0x7efa648, 0xc003a9c820}, {0xc00091ffb0, 0xd}, {0x74c3d80, 0xb}) test/e2e/network/service.go:4026 +0x117 k8s.io/kubernetes/test/e2e/network.glob..func26.18() test/e2e/network/service.go:1544 +0x1b0from junit_01.xml
[BeforeEach] [sig-network] Services set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 19:01:51.113 Nov 8 19:01:51.113: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename services 11/08/22 19:01:51.114 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 19:01:51.134 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 19:01:51.137 [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] Services test/e2e/network/service.go:767 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] test/e2e/network/service.go:1528 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7792 11/08/22 19:01:51.141 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 11/08/22 19:01:51.156 STEP: creating service externalsvc in namespace services-7792 11/08/22 19:01:51.156 STEP: creating replication controller externalsvc in namespace services-7792 11/08/22 19:01:51.167 I1108 19:01:51.177705 148400 runners.go:193] Created replication controller with name: externalsvc, namespace: services-7792, replica count: 2 I1108 19:01:54.228353 148400 runners.go:193] externalsvc Pods: 2 out of 2 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:01:57.228730 148400 runners.go:193] externalsvc Pods: 2 out of 2 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1108 19:01:57.228779 148400 runners.go:193] Logging node info for node 172.17.0.1 I1108 19:01:57.233742 148400 runners.go:193] Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 7083 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:59:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} I1108 19:01:57.234076 148400 runners.go:193] Logging kubelet events for node 172.17.0.1 I1108 19:01:57.238927 148400 runners.go:193] Logging pods the kubelet thinks is on node 172.17.0.1 I1108 19:01:57.248746 148400 runners.go:193] coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) I1108 19:01:57.248792 148400 runners.go:193] Container coredns ready: false, restart count 15 I1108 19:01:57.248798 148400 runners.go:193] externalsvc-27bnr started at 2022-11-08 19:01:51 +0000 UTC (0+1 container statuses recorded) I1108 19:01:57.248803 148400 runners.go:193] Container externalsvc ready: true, restart count 1 I1108 19:01:57.248807 148400 runners.go:193] externalsvc-8nr8c started at 2022-11-08 19:01:51 +0000 UTC (0+1 container statuses recorded) I1108 19:01:57.248811 148400 runners.go:193] Container externalsvc ready: false, restart count 0 I1108 19:01:57.279632 148400 runners.go:193] Latency metrics for node 172.17.0.1 I1108 19:01:57.282863 148400 runners.go:193] Running kubectl logs on non-ready containers in services-7792 Nov 8 19:01:57.287: INFO: Logs of services-7792/externalsvc-8nr8c:externalsvc on node 172.17.0.1 Nov 8 19:01:57.287: INFO: : STARTLOG I1108 19:01:53.880026 1 log.go:195] Serving on port 9376. ENDLOG for container services-7792:externalsvc-8nr8c:externalsvc Nov 8 19:01:57.287: INFO: Unexpected error: Expected Service externalsvc to be running: <*errors.errorString | 0xc002d1c3e0>: { s: "1 containers failed which is more than allowed 0", } Nov 8 19:01:57.287: FAIL: Expected Service externalsvc to be running: 1 containers failed which is more than allowed 0 Full Stack Trace k8s.io/kubernetes/test/e2e/network.createAndGetExternalServiceFQDN({0x7efa648, 0xc003a9c820}, {0xc00091ffb0, 0xd}, {0x74c3d80, 0xb}) test/e2e/network/service.go:4026 +0x117 k8s.io/kubernetes/test/e2e/network.glob..func26.18() test/e2e/network/service.go:1544 +0x1b0 Nov 8 19:01:57.288: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 Nov 8 19:01:57.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] Services test/e2e/network/service.go:771 Nov 8 19:01:57.306: INFO: Output of kubectl describe svc: Nov 8 19:01:57.306: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-7792 describe svc --namespace=services-7792' Nov 8 19:01:57.414: INFO: stderr: "" Nov 8 19:01:57.414: INFO: stdout: "Name: externalsvc\nNamespace: services-7792\nLabels: <none>\nAnnotations: <none>\nSelector: name=externalsvc\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.0.186\nIPs: 10.0.0.186\nPort: <unset> 80/TCP\nTargetPort: 9376/TCP\nEndpoints: 10.88.6.103:9376\nSession Affinity: None\nEvents: <none>\n" Nov 8 19:01:57.414: INFO: Name: externalsvc Namespace: services-7792 Labels: <none> Annotations: <none> Selector: name=externalsvc Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.186 IPs: 10.0.0.186 Port: <unset> 80/TCP TargetPort: 9376/TCP Endpoints: 10.88.6.103:9376 Session Affinity: None Events: <none> [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 19:01:57.414 STEP: Collecting events from namespace "services-7792". 11/08/22 19:01:57.414 STEP: Found 12 events. 11/08/22 19:01:57.419 Nov 8 19:01:57.419: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalsvc-27bnr: { } Scheduled: Successfully assigned services-7792/externalsvc-27bnr to 172.17.0.1 Nov 8 19:01:57.419: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalsvc-8nr8c: { } Scheduled: Successfully assigned services-7792/externalsvc-8nr8c to 172.17.0.1 Nov 8 19:01:57.419: INFO: At 2022-11-08 19:01:51 +0000 UTC - event for externalsvc: {replication-controller } SuccessfulCreate: Created pod: externalsvc-27bnr Nov 8 19:01:57.419: INFO: At 2022-11-08 19:01:51 +0000 UTC - event for externalsvc: {replication-controller } SuccessfulCreate: Created pod: externalsvc-8nr8c Nov 8 19:01:57.419: INFO: At 2022-11-08 19:01:53 +0000 UTC - event for externalsvc-27bnr: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 19:01:57.419: INFO: At 2022-11-08 19:01:53 +0000 UTC - event for externalsvc-27bnr: {kubelet 172.17.0.1} Created: Created container externalsvc Nov 8 19:01:57.419: INFO: At 2022-11-08 19:01:53 +0000 UTC - event for externalsvc-27bnr: {kubelet 172.17.0.1} Started: Started container externalsvc Nov 8 19:01:57.419: INFO: At 2022-11-08 19:01:53 +0000 UTC - event for externalsvc-27bnr: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 19:01:57.419: INFO: At 2022-11-08 19:01:53 +0000 UTC - event for externalsvc-8nr8c: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 19:01:57.419: INFO: At 2022-11-08 19:01:53 +0000 UTC - event for externalsvc-8nr8c: {kubelet 172.17.0.1} Created: Created container externalsvc Nov 8 19:01:57.419: INFO: At 2022-11-08 19:01:53 +0000 UTC - event for externalsvc-8nr8c: {kubelet 172.17.0.1} Started: Started container externalsvc Nov 8 19:01:57.419: INFO: At 2022-11-08 19:01:55 +0000 UTC - event for externalsvc-8nr8c: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 19:01:57.422: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 19:01:57.422: INFO: externalsvc-27bnr 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:01:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:01:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:01:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:01:51 +0000 UTC }] Nov 8 19:01:57.422: INFO: externalsvc-8nr8c 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:01:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:01:55 +0000 UTC ContainersNotReady containers with unready status: [externalsvc]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:01:55 +0000 UTC ContainersNotReady containers with unready status: [externalsvc]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:01:51 +0000 UTC }] Nov 8 19:01:57.422: INFO: Nov 8 19:01:57.439: INFO: Logging node info for node 172.17.0.1 Nov 8 19:01:57.442: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 7083 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:59:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 19:01:57.443: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 19:01:57.446: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 19:01:57.452: INFO: externalsvc-8nr8c started at 2022-11-08 19:01:51 +0000 UTC (0+1 container statuses recorded) Nov 8 19:01:57.452: INFO: Container externalsvc ready: false, restart count 0 Nov 8 19:01:57.452: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 19:01:57.452: INFO: Container coredns ready: false, restart count 15 Nov 8 19:01:57.452: INFO: externalsvc-27bnr started at 2022-11-08 19:01:51 +0000 UTC (0+1 container statuses recorded) Nov 8 19:01:57.452: INFO: Container externalsvc ready: true, restart count 1 Nov 8 19:01:57.486: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 STEP: Destroying namespace "services-7792" for this suite. 11/08/22 19:01:57.486
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sServices\sshould\sbe\sable\sto\schange\sthe\stype\sfrom\sNodePort\sto\sExternalName\s\[Conformance\]$'
test/e2e/network/service.go:1604 k8s.io/kubernetes/test/e2e/network.glob..func26.19() test/e2e/network/service.go:1604 +0x34afrom junit_01.xml
[BeforeEach] [sig-network] Services set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 18:59:13.971 Nov 8 18:59:13.971: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename services 11/08/22 18:59:13.972 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 18:59:13.995 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 18:59:14.002 [BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] Services test/e2e/network/service.go:767 [It] should be able to change the type from NodePort to ExternalName [Conformance] test/e2e/network/service.go:1570 STEP: creating a service nodeport-service with the type=NodePort in namespace services-6480 11/08/22 18:59:14.009 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 11/08/22 18:59:14.029 STEP: creating service externalsvc in namespace services-6480 11/08/22 18:59:14.03 STEP: creating replication controller externalsvc in namespace services-6480 11/08/22 18:59:14.046 I1108 18:59:14.057997 148400 runners.go:193] Created replication controller with name: externalsvc, namespace: services-6480, replica count: 2 I1108 18:59:17.109570 148400 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName 11/08/22 18:59:17.113 Nov 8 18:59:17.133: INFO: Creating new exec pod Nov 8 18:59:17.139: INFO: Waiting up to 5m0s for pod "execpodw9ws9" in namespace "services-6480" to be "running" Nov 8 18:59:17.148: INFO: Pod "execpodw9ws9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.230032ms Nov 8 18:59:19.152: INFO: Pod "execpodw9ws9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01303333s Nov 8 18:59:21.152: INFO: Pod "execpodw9ws9": Phase="Running", Reason="", readiness=false. Elapsed: 4.013094267s Nov 8 18:59:21.152: INFO: Pod "execpodw9ws9" satisfied condition "running" Nov 8 18:59:21.152: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:21.294: INFO: rc: 1 Nov 8 18:59:21.294: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:23.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:23.595: INFO: rc: 137 Nov 8 18:59:23.595: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:25.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:25.445: INFO: rc: 1 Nov 8 18:59:25.445: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:27.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:27.415: INFO: rc: 1 Nov 8 18:59:27.415: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:29.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:29.426: INFO: rc: 1 Nov 8 18:59:29.426: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:31.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:31.416: INFO: rc: 1 Nov 8 18:59:31.416: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:33.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:33.437: INFO: rc: 1 Nov 8 18:59:33.437: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:35.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:35.589: INFO: rc: 137 Nov 8 18:59:35.589: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:37.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:37.439: INFO: rc: 1 Nov 8 18:59:37.439: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:39.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:39.437: INFO: rc: 1 Nov 8 18:59:39.437: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:41.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:41.422: INFO: rc: 1 Nov 8 18:59:41.422: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:43.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:43.404: INFO: rc: 1 Nov 8 18:59:43.404: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:45.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:45.424: INFO: rc: 1 Nov 8 18:59:45.424: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:47.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:47.415: INFO: rc: 1 Nov 8 18:59:47.415: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:49.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:49.426: INFO: rc: 1 Nov 8 18:59:49.426: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:51.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:51.418: INFO: rc: 1 Nov 8 18:59:51.418: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:53.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:53.421: INFO: rc: 1 Nov 8 18:59:53.421: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:55.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:55.427: INFO: rc: 1 Nov 8 18:59:55.427: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:57.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:57.442: INFO: rc: 1 Nov 8 18:59:57.442: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 18:59:59.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 18:59:59.590: INFO: rc: 137 Nov 8 18:59:59.590: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:01.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:01.420: INFO: rc: 1 Nov 8 19:00:01.421: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:03.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:03.409: INFO: rc: 1 Nov 8 19:00:03.410: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:05.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:05.439: INFO: rc: 1 Nov 8 19:00:05.439: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:07.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:07.432: INFO: rc: 1 Nov 8 19:00:07.432: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:09.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:09.421: INFO: rc: 1 Nov 8 19:00:09.421: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:11.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:11.424: INFO: rc: 1 Nov 8 19:00:11.424: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:13.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:13.430: INFO: rc: 1 Nov 8 19:00:13.430: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:15.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:15.418: INFO: rc: 1 Nov 8 19:00:15.418: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:17.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:17.423: INFO: rc: 1 Nov 8 19:00:17.423: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:19.296: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:19.407: INFO: rc: 1 Nov 8 19:00:19.407: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:21.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:21.444: INFO: rc: 1 Nov 8 19:00:21.444: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:23.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:23.406: INFO: rc: 1 Nov 8 19:00:23.406: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:25.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:25.461: INFO: rc: 1 Nov 8 19:00:25.461: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:27.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:27.426: INFO: rc: 1 Nov 8 19:00:27.426: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:29.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:29.439: INFO: rc: 1 Nov 8 19:00:29.439: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:31.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:31.422: INFO: rc: 1 Nov 8 19:00:31.422: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:33.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:33.424: INFO: rc: 1 Nov 8 19:00:33.424: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:35.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:35.429: INFO: rc: 1 Nov 8 19:00:35.429: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:37.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:37.449: INFO: rc: 1 Nov 8 19:00:37.449: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:39.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:39.433: INFO: rc: 1 Nov 8 19:00:39.433: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:41.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:41.441: INFO: rc: 1 Nov 8 19:00:41.441: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:43.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:43.420: INFO: rc: 1 Nov 8 19:00:43.420: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:45.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:45.431: INFO: rc: 1 Nov 8 19:00:45.431: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:47.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:47.419: INFO: rc: 1 Nov 8 19:00:47.419: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:49.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:49.428: INFO: rc: 1 Nov 8 19:00:49.428: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:51.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:51.437: INFO: rc: 1 Nov 8 19:00:51.437: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:53.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:53.422: INFO: rc: 1 Nov 8 19:00:53.422: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:55.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:55.436: INFO: rc: 1 Nov 8 19:00:55.436: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:57.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:57.416: INFO: rc: 1 Nov 8 19:00:57.416: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:00:59.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:00:59.585: INFO: rc: 137 Nov 8 19:00:59.585: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:01:01.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:01:01.416: INFO: rc: 1 Nov 8 19:01:01.417: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:01:03.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:01:03.416: INFO: rc: 1 Nov 8 19:01:03.416: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:01:05.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:01:05.418: INFO: rc: 1 Nov 8 19:01:05.418: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:01:07.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:01:07.420: INFO: rc: 1 Nov 8 19:01:07.421: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:01:09.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:01:09.411: INFO: rc: 1 Nov 8 19:01:09.411: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:01:11.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:01:11.431: INFO: rc: 1 Nov 8 19:01:11.431: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:01:13.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:01:13.424: INFO: rc: 1 Nov 8 19:01:13.424: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:01:15.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:01:15.436: INFO: rc: 1 Nov 8 19:01:15.436: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:01:17.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:01:17.416: INFO: rc: 1 Nov 8 19:01:17.416: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:01:19.294: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:01:19.432: INFO: rc: 1 Nov 8 19:01:19.432: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:01:21.295: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:01:21.406: INFO: rc: 1 Nov 8 19:01:21.406: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:01:21.406: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 exec execpodw9ws9 -- /bin/sh -x -c nslookup nodeport-service.services-6480.svc.cluster.local' Nov 8 19:01:21.555: INFO: rc: 1 Nov 8 19:01:21.555: INFO: ExternalName service "services-6480/execpodw9ws9" failed to resolve to IP Nov 8 19:01:21.555: INFO: Unexpected error: <*errors.errorString | 0xc000285cb0>: { s: "timed out waiting for the condition", } Nov 8 19:01:21.555: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func26.19() test/e2e/network/service.go:1604 +0x34a STEP: deleting ReplicationController externalsvc in namespace services-6480, will wait for the garbage collector to delete the pods 11/08/22 19:01:21.556 Nov 8 19:01:21.621: INFO: Deleting ReplicationController externalsvc took: 10.203913ms Nov 8 19:01:21.722: INFO: Terminating ReplicationController externalsvc pods took: 101.048496ms Nov 8 19:01:25.641: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 Nov 8 19:01:25.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] Services test/e2e/network/service.go:771 Nov 8 19:01:25.659: INFO: Output of kubectl describe svc: Nov 8 19:01:25.659: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://localhost:6443 --kubeconfig=/workspace/.kube/config --namespace=services-6480 describe svc --namespace=services-6480' Nov 8 19:01:25.773: INFO: stderr: "No resources found in services-6480 namespace.\n" Nov 8 19:01:25.773: INFO: stdout: "" Nov 8 19:01:25.773: INFO: [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 19:01:25.773 STEP: Collecting events from namespace "services-6480". 11/08/22 19:01:25.774 STEP: Found 21 events. 11/08/22 19:01:25.778 Nov 8 19:01:25.778: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodw9ws9: { } Scheduled: Successfully assigned services-6480/execpodw9ws9 to 172.17.0.1 Nov 8 19:01:25.778: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalsvc-cd9cj: { } Scheduled: Successfully assigned services-6480/externalsvc-cd9cj to 172.17.0.1 Nov 8 19:01:25.778: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalsvc-xmnz4: { } Scheduled: Successfully assigned services-6480/externalsvc-xmnz4 to 172.17.0.1 Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:14 +0000 UTC - event for externalsvc: {replication-controller } SuccessfulCreate: Created pod: externalsvc-cd9cj Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:14 +0000 UTC - event for externalsvc: {replication-controller } SuccessfulCreate: Created pod: externalsvc-xmnz4 Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:16 +0000 UTC - event for externalsvc-cd9cj: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:16 +0000 UTC - event for externalsvc-cd9cj: {kubelet 172.17.0.1} Started: Started container externalsvc Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:16 +0000 UTC - event for externalsvc-cd9cj: {kubelet 172.17.0.1} Created: Created container externalsvc Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:16 +0000 UTC - event for externalsvc-xmnz4: {kubelet 172.17.0.1} Created: Created container externalsvc Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:16 +0000 UTC - event for externalsvc-xmnz4: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:16 +0000 UTC - event for externalsvc-xmnz4: {kubelet 172.17.0.1} Started: Started container externalsvc Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:18 +0000 UTC - event for externalsvc-cd9cj: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:18 +0000 UTC - event for externalsvc-xmnz4: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:19 +0000 UTC - event for execpodw9ws9: {kubelet 172.17.0.1} Failed: Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: can't get final child's PID from pipe: EOF: unknown Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:19 +0000 UTC - event for execpodw9ws9: {kubelet 172.17.0.1} Created: Created container agnhost-container Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:19 +0000 UTC - event for execpodw9ws9: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:20 +0000 UTC - event for execpodw9ws9: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:22 +0000 UTC - event for execpodw9ws9: {kubelet 172.17.0.1} Started: Started container agnhost-container Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:23 +0000 UTC - event for externalsvc-xmnz4: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container externalsvc in pod externalsvc-xmnz4_services-6480(b79bd74b-8111-4991-8fa6-1ad216a3b0d0) Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:24 +0000 UTC - event for externalsvc-cd9cj: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container externalsvc in pod externalsvc-cd9cj_services-6480(fb93709a-5eec-46a4-b370-d5be29751a01) Nov 8 19:01:25.778: INFO: At 2022-11-08 18:59:26 +0000 UTC - event for execpodw9ws9: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container agnhost-container in pod execpodw9ws9_services-6480(5c0d3d8c-bbda-4e1e-af00-3e347ae5df2d) Nov 8 19:01:25.782: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 19:01:25.782: INFO: execpodw9ws9 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:59:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:01:00 +0000 UTC ContainersNotReady containers with unready status: [agnhost-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:01:00 +0000 UTC ContainersNotReady containers with unready status: [agnhost-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:59:17 +0000 UTC }] Nov 8 19:01:25.782: INFO: Nov 8 19:01:25.801: INFO: Logging node info for node 172.17.0.1 Nov 8 19:01:25.804: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 7083 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:59:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 19:01:25.805: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 19:01:25.809: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 19:01:25.815: INFO: execpodw9ws9 started at 2022-11-08 18:59:17 +0000 UTC (0+1 container statuses recorded) Nov 8 19:01:25.815: INFO: Container agnhost-container ready: false, restart count 4 Nov 8 19:01:25.815: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 19:01:25.815: INFO: Container coredns ready: false, restart count 15 Nov 8 19:01:25.850: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 STEP: Destroying namespace "services-6480" for this suite. 11/08/22 19:01:25.85
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sEphemeral\sContainers\s\[NodeConformance\]\swill\sstart\san\sephemeral\scontainer\sin\san\sexisting\spod\s\[Conformance\]$'
test/e2e/framework/pod/pod_client.go:172 k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).AddEphemeralContainerSync(0xc004bfe660, 0xc003fa1000, 0xc004239e10, 0x3?) test/e2e/framework/pod/pod_client.go:172 +0x65c k8s.io/kubernetes/test/e2e/common/node.glob..func6.2() test/e2e/common/node/ephemeral_containers.go:72 +0x528from junit_01.xml
[BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 19:02:05.922 Nov 8 19:02:05.922: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename ephemeral-containers-test 11/08/22 19:02:05.924 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 19:02:05.941 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 19:02:05.946 [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] test/e2e/common/node/ephemeral_containers.go:38 [It] will start an ephemeral container in an existing pod [Conformance] test/e2e/common/node/ephemeral_containers.go:45 STEP: creating a target pod 11/08/22 19:02:05.952 Nov 8 19:02:05.962: INFO: Waiting up to 5m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-3779" to be "running and ready" Nov 8 19:02:05.965: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.133137ms Nov 8 19:02:05.965: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) Nov 8 19:02:07.970: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008316062s Nov 8 19:02:07.970: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) Nov 8 19:02:09.969: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.007680776s Nov 8 19:02:09.969: INFO: The phase of Pod ephemeral-containers-target-pod is Running (Ready = true) Nov 8 19:02:09.969: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "running and ready" STEP: adding an ephemeral container 11/08/22 19:02:09.973 Nov 8 19:02:09.989: INFO: Waiting up to 1m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-3779" to be "container debugger running" Nov 8 19:02:09.995: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.925876ms Nov 8 19:02:12.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 2.01010926s Nov 8 19:02:14.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.010054992s Nov 8 19:02:16.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 6.010478028s Nov 8 19:02:17.999: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 8.009616506s Nov 8 19:02:19.999: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 10.009504412s Nov 8 19:02:21.999: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 12.009751503s Nov 8 19:02:24.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 14.010262327s Nov 8 19:02:26.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 16.010656631s Nov 8 19:02:28.001: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 18.01095037s Nov 8 19:02:30.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 20.010530883s Nov 8 19:02:32.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 22.010845732s Nov 8 19:02:34.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 24.010432038s Nov 8 19:02:36.001: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 26.011052236s Nov 8 19:02:37.999: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 28.00910897s Nov 8 19:02:40.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 30.010255172s Nov 8 19:02:42.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 32.010111479s Nov 8 19:02:44.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 34.010148789s Nov 8 19:02:46.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 36.009966818s Nov 8 19:02:48.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 38.010683701s Nov 8 19:02:50.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 40.010549934s Nov 8 19:02:52.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 42.010260242s Nov 8 19:02:54.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 44.010379179s Nov 8 19:02:56.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 46.009976344s Nov 8 19:02:57.999: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 48.009602592s Nov 8 19:03:00.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 50.010009563s Nov 8 19:03:01.998: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 52.008599966s Nov 8 19:03:04.000: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 54.010264513s Nov 8 19:03:06.001: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 56.011021376s Nov 8 19:03:08.001: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 58.010921274s Nov 8 19:03:10.001: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.010961949s Nov 8 19:03:10.005: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.015300093s Nov 8 19:03:10.006: INFO: Unexpected error: <*pod.timeoutError | 0xc004e32d20>: { msg: "timed out while waiting for pod ephemeral-containers-test-3779/ephemeral-containers-target-pod to be container debugger running", observedObjects: [ <*v1.Pod | 0xc0008f8c00>{ TypeMeta: {Kind: "", APIVersion: ""}, ObjectMeta: { Name: "ephemeral-containers-target-pod", GenerateName: "", Namespace: "ephemeral-containers-test-3779", SelfLink: "", UID: "32667caf-f561-4bdb-a31d-882fb88585cb", ResourceVersion: "7792", Generation: 0, CreationTimestamp: { Time: { wall: 0, ext: 63803530925, loc: { name: "Local", zone: [ {name: "UTC", offset: 0, isDST: false}, ], tx: [ { when: -576460752303423488, index: 0, isstd: false, isutc: false, }, ], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: "UTC", offset: 0, isDST: false}, }, }, }, DeletionTimestamp: nil, DeletionGracePeriodSeconds: nil, Labels: nil, Annotations: nil, OwnerReferences: nil, Finalizers: nil, ManagedFields: [ { Manager: "e2e.test", Operation: "Update", APIVersion: "v1", Time: { Time: { wall: 0, ext: 63803530925, loc: { name: "Local", zone: [...], tx: [...], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: ..., offset: ..., isDST: ...}, }, }, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\\"test-container-1\\\"}\":{\".\":{},\"f:args\":{},\"f:command\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}", }, Subresource: "", }, { Manager: "kubelet", Operation: "Update", APIVersion: "v1", Time: { Time: { wall: 0, ext: 63803530988, loc: { name: "Local", zone: [...], tx: [...], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: ..., offset: ..., isDST: ...}, }, }, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:status\":{\"f:conditions\":{\"k:{\\\"type\\\":\\\"ContainersReady\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Initialized\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Ready\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}}},\"f:containerStatuses\":{},\"f:ephemeralContainerStatuses\":{},\"f:hostIP\":{},\"f:phase\":{},\"f:podIP\":{},\"f:podIPs\":{\".\":{},\"k:{\\\"ip\\\":\\\"10.88.6.147\\\"}\":{\".\":{},\"f:ip\":{}},\"k:{\\\"ip\\\":\\\"2001:4860:4860::693\\\"}\":{\".\":{},\"f:ip\":{}}},\"f:startTime\":{}}}", }, Subresource: "status", }, ], }, Spec: { Volumes: [ { Name: "kube-api-access-dhkpj", VolumeSource: { HostPath: nil, EmptyDir: nil, GCEPersistentDisk: nil, AWSElasticBlockStore: nil, GitRepo: nil, Secret: nil, NFS: nil, ISCSI: nil, Glusterfs: nil, PersistentVolumeClaim: nil, RBD: nil, FlexVolume: nil, Cinder: nil, CephFS: nil, Flocker: nil, DownwardAPI: nil, FC: nil, AzureFile: nil, ConfigMap: nil, VsphereVolume: nil, Quobyte: nil, AzureDisk: nil, PhotonPersistentDisk: nil, Projected: { Sources: [ { Secret: ..., DownwardAPI: ..., ConfigMap: ..., ServiceAccountToken: ..., }, { Secret: ..., DownwardAPI: ..., ConfigMap: ..., ServiceAccountToken: ..., }, { Secret: ..., DownwardAPI: ..., ConfigMap: ..., ServiceAccountToken: ..., ... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output Nov 8 19:03:10.006: FAIL: timed out while waiting for pod ephemeral-containers-test-3779/ephemeral-containers-target-pod to be container debugger running Full Stack Trace k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).AddEphemeralContainerSync(0xc004bfe660, 0xc003fa1000, 0xc004239e10, 0x3?) test/e2e/framework/pod/pod_client.go:172 +0x65c k8s.io/kubernetes/test/e2e/common/node.glob..func6.2() test/e2e/common/node/ephemeral_containers.go:72 +0x528 [AfterEach] [sig-node] Ephemeral Containers [NodeConformance] test/e2e/framework/node/init/init.go:32 Nov 8 19:03:10.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 19:03:10.012 STEP: Collecting events from namespace "ephemeral-containers-test-3779". 11/08/22 19:03:10.012 STEP: Found 9 events. 11/08/22 19:03:10.016 Nov 8 19:03:10.016: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for ephemeral-containers-target-pod: { } Scheduled: Successfully assigned ephemeral-containers-test-3779/ephemeral-containers-target-pod to 172.17.0.1 Nov 8 19:03:10.016: INFO: At 2022-11-08 19:02:08 +0000 UTC - event for ephemeral-containers-target-pod: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine Nov 8 19:03:10.016: INFO: At 2022-11-08 19:02:08 +0000 UTC - event for ephemeral-containers-target-pod: {kubelet 172.17.0.1} Created: Created container test-container-1 Nov 8 19:03:10.016: INFO: At 2022-11-08 19:02:09 +0000 UTC - event for ephemeral-containers-target-pod: {kubelet 172.17.0.1} Started: Started container test-container-1 Nov 8 19:03:10.016: INFO: At 2022-11-08 19:02:10 +0000 UTC - event for ephemeral-containers-target-pod: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 19:03:10.016: INFO: At 2022-11-08 19:02:13 +0000 UTC - event for ephemeral-containers-target-pod: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine Nov 8 19:03:10.016: INFO: At 2022-11-08 19:02:13 +0000 UTC - event for ephemeral-containers-target-pod: {kubelet 172.17.0.1} Created: Created container debugger Nov 8 19:03:10.016: INFO: At 2022-11-08 19:02:13 +0000 UTC - event for ephemeral-containers-target-pod: {kubelet 172.17.0.1} Started: Started container debugger Nov 8 19:03:10.016: INFO: At 2022-11-08 19:02:16 +0000 UTC - event for ephemeral-containers-target-pod: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container test-container-1 in pod ephemeral-containers-target-pod_ephemeral-containers-test-3779(32667caf-f561-4bdb-a31d-882fb88585cb) Nov 8 19:03:10.021: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 19:03:10.021: INFO: ephemeral-containers-target-pod 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:02:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:02:50 +0000 UTC ContainersNotReady containers with unready status: [test-container-1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:02:50 +0000 UTC ContainersNotReady containers with unready status: [test-container-1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:02:05 +0000 UTC }] Nov 8 19:03:10.021: INFO: Nov 8 19:03:10.044: INFO: Logging node info for node 172.17.0.1 Nov 8 19:03:10.047: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 7083 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:59:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:59:54 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 19:03:10.047: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 19:03:10.051: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 19:03:10.057: INFO: ephemeral-containers-target-pod started at 2022-11-08 19:02:05 +0000 UTC (0+1 container statuses recorded) Nov 8 19:03:10.057: INFO: Container test-container-1 ready: false, restart count 3 Nov 8 19:03:10.057: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 19:03:10.057: INFO: Container coredns ready: false, restart count 15 Nov 8 19:03:10.086: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] tear down framework | framework.go:193 STEP: Destroying namespace "ephemeral-containers-test-3779" for this suite. 11/08/22 19:03:10.087
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\stcp\:8080\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/common/node/container_probe.go:994 k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc0003f05a0, 0xc004d4a400, 0x0, 0xc00344aec0?) test/e2e/common/node/container_probe.go:994 +0xdad k8s.io/kubernetes/test/e2e/common/node.glob..func2.7() test/e2e/common/node/container_probe.go:191 +0x105from junit_01.xml
[BeforeEach] [sig-node] Probing container set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 19:03:16.183 Nov 8 19:03:16.183: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename container-probe 11/08/22 19:03:16.184 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 19:03:16.201 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 19:03:16.206 [BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] Probing container test/e2e/common/node/container_probe.go:63 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] test/e2e/common/node/container_probe.go:184 STEP: Creating pod liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced in namespace container-probe-3700 11/08/22 19:03:16.21 Nov 8 19:03:16.222: INFO: Waiting up to 5m0s for pod "liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced" in namespace "container-probe-3700" to be "not pending" Nov 8 19:03:16.226: INFO: Pod "liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced": Phase="Pending", Reason="", readiness=false. Elapsed: 3.204835ms Nov 8 19:03:18.231: INFO: Pod "liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008139846s Nov 8 19:03:20.231: INFO: Pod "liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced": Phase="Running", Reason="", readiness=false. Elapsed: 4.00806685s Nov 8 19:03:20.231: INFO: Pod "liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced" satisfied condition "not pending" Nov 8 19:03:20.231: INFO: Started pod liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced in namespace container-probe-3700 STEP: checking the pod's current state and verifying that restartCount is present 11/08/22 19:03:20.231 Nov 8 19:03:20.235: INFO: Initial restart count of pod liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced is 0 Nov 8 19:03:24.249: INFO: Restart count of pod container-probe-3700/liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced is now 1 (4.013708269s elapsed) Nov 8 19:03:34.275: INFO: Restart count of pod container-probe-3700/liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced is now 2 (14.039840631s elapsed) Nov 8 19:04:00.337: INFO: Restart count of pod container-probe-3700/liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced is now 3 (40.102396488s elapsed) Nov 8 19:04:44.462: INFO: Restart count of pod container-probe-3700/liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced is now 4 (1m24.226722782s elapsed) Nov 8 19:06:06.677: INFO: Restart count of pod container-probe-3700/liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced is now 5 (2m46.441466946s elapsed) Nov 8 19:07:20.860: FAIL: pod container-probe-3700/liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced - expected number of restarts: 0, found restarts: 5 Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc0003f05a0, 0xc004d4a400, 0x0, 0xc00344aec0?) test/e2e/common/node/container_probe.go:994 +0xdad k8s.io/kubernetes/test/e2e/common/node.glob..func2.7() test/e2e/common/node/container_probe.go:191 +0x105 STEP: deleting the pod 11/08/22 19:07:20.861 [AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 Nov 8 19:07:20.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 19:07:20.887 STEP: Collecting events from namespace "container-probe-3700". 11/08/22 19:07:20.887 STEP: Found 6 events. 11/08/22 19:07:20.893 Nov 8 19:07:20.893: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced: { } Scheduled: Successfully assigned container-probe-3700/liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced to 172.17.0.1 Nov 8 19:07:20.893: INFO: At 2022-11-08 19:03:18 +0000 UTC - event for liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 19:07:20.893: INFO: At 2022-11-08 19:03:18 +0000 UTC - event for liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced: {kubelet 172.17.0.1} Created: Created container agnhost-container Nov 8 19:07:20.893: INFO: At 2022-11-08 19:03:18 +0000 UTC - event for liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced: {kubelet 172.17.0.1} Started: Started container agnhost-container Nov 8 19:07:20.893: INFO: At 2022-11-08 19:03:19 +0000 UTC - event for liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 19:07:20.893: INFO: At 2022-11-08 19:03:26 +0000 UTC - event for liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container agnhost-container in pod liveness-fb3a89a8-4141-4db9-8d45-7045b5a90ced_container-probe-3700(a5566d6b-24d0-482e-a4c9-c6519e521ce3) Nov 8 19:07:20.897: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 19:07:20.897: INFO: Nov 8 19:07:20.901: INFO: Logging node info for node 172.17.0.1 Nov 8 19:07:20.904: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 8011 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 19:05:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 19:07:20.904: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 19:07:20.909: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 19:07:20.924: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 19:07:20.924: INFO: Container coredns ready: false, restart count 16 Nov 8 19:07:20.964: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 STEP: Destroying namespace "container-probe-3700" for this suite. 11/08/22 19:07:20.964
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sProbing\scontainer\swith\sreadiness\sprobe\sthat\sfails\sshould\snever\sbe\sready\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/common/node/container_probe.go:127 k8s.io/kubernetes/test/e2e/common/node.glob..func2.3() test/e2e/common/node/container_probe.go:127 +0x3dafrom junit_01.xml
[BeforeEach] [sig-node] Probing container set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 19:08:55.651 Nov 8 19:08:55.651: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename container-probe 11/08/22 19:08:55.653 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 19:08:55.683 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 19:08:55.688 [BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] Probing container test/e2e/common/node/container_probe.go:63 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] test/e2e/common/node/container_probe.go:108 Nov 8 19:09:55.713: FAIL: pod should have a restart count of 0 but got 3 Expected <int>: 3 to equal <int>: 0 Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.glob..func2.3() test/e2e/common/node/container_probe.go:127 +0x3da [AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32 Nov 8 19:09:55.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 19:09:55.719 STEP: Collecting events from namespace "container-probe-1142". 11/08/22 19:09:55.719 STEP: Found 9 events. 11/08/22 19:09:55.725 Nov 8 19:09:55.726: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-webserver-67340723-7efe-4e5e-9cb7-d75226fd03a9: { } Scheduled: Successfully assigned container-probe-1142/test-webserver-67340723-7efe-4e5e-9cb7-d75226fd03a9 to 172.17.0.1 Nov 8 19:09:55.726: INFO: At 2022-11-08 19:08:57 +0000 UTC - event for test-webserver-67340723-7efe-4e5e-9cb7-d75226fd03a9: {kubelet 172.17.0.1} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container task "66e2b888dead86fbf8a7b7e5ea2371d121e7132124c4e7ebc6cbb21744848eee": cannot start a stopped process: unknown Nov 8 19:09:55.726: INFO: At 2022-11-08 19:09:10 +0000 UTC - event for test-webserver-67340723-7efe-4e5e-9cb7-d75226fd03a9: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 19:09:55.726: INFO: At 2022-11-08 19:09:10 +0000 UTC - event for test-webserver-67340723-7efe-4e5e-9cb7-d75226fd03a9: {kubelet 172.17.0.1} Created: Created container test-webserver Nov 8 19:09:55.726: INFO: At 2022-11-08 19:09:10 +0000 UTC - event for test-webserver-67340723-7efe-4e5e-9cb7-d75226fd03a9: {kubelet 172.17.0.1} Started: Started container test-webserver Nov 8 19:09:55.726: INFO: At 2022-11-08 19:09:11 +0000 UTC - event for test-webserver-67340723-7efe-4e5e-9cb7-d75226fd03a9: {kubelet 172.17.0.1} Unhealthy: Readiness probe failed: Get "http://10.88.7.67:81/": dial tcp 10.88.7.67:81: connect: connection refused Nov 8 19:09:55.726: INFO: At 2022-11-08 19:09:12 +0000 UTC - event for test-webserver-67340723-7efe-4e5e-9cb7-d75226fd03a9: {kubelet 172.17.0.1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 8 19:09:55.726: INFO: At 2022-11-08 19:09:14 +0000 UTC - event for test-webserver-67340723-7efe-4e5e-9cb7-d75226fd03a9: {kubelet 172.17.0.1} Unhealthy: Readiness probe failed: Get "http://10.88.7.69:81/": dial tcp 10.88.7.69:81: connect: connection refused Nov 8 19:09:55.726: INFO: At 2022-11-08 19:09:18 +0000 UTC - event for test-webserver-67340723-7efe-4e5e-9cb7-d75226fd03a9: {kubelet 172.17.0.1} BackOff: Back-off restarting failed container test-webserver in pod test-webserver-67340723-7efe-4e5e-9cb7-d75226fd03a9_container-probe-1142(4e9162d9-cc28-4ea0-a8ac-06b4742adffd) Nov 8 19:09:55.730: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 19:09:55.730: INFO: test-webserver-67340723-7efe-4e5e-9cb7-d75226fd03a9 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:08:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:08:55 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:08:55 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:08:55 +0000 UTC }] Nov 8 19:09:55.730: INFO: Nov 8 19:09:55.743: INFO: Logging node info for node 172.17.0.1 Nov 8 19:09:55.749: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 8011 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 19:05:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 19:09:55.749: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 19:09:55.753: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 19:09:55.761: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 19:09:55.761: INFO: Container coredns ready: false, restart count 17 Nov 8 19:09:55.761: INFO: test-webserver-67340723-7efe-4e5e-9cb7-d75226fd03a9 started at 2022-11-08 19:08:55 +0000 UTC (0+1 container statuses recorded) Nov 8 19:09:55.761: INFO: Container test-webserver ready: false, restart count 3 Nov 8 19:09:55.808: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193 STEP: Destroying namespace "container-probe-1142" for this suite. 11/08/22 19:09:55.809
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sVariable\sExpansion\sshould\ssucceed\sin\swriting\ssubpaths\sin\scontainer\s\[Slow\]\s\[Conformance\]$'
test/e2e/common/node/expansion.go:348 k8s.io/kubernetes/test/e2e/common/node.glob..func7.8() test/e2e/common/node/expansion.go:348 +0x4b2from junit_01.xml
[BeforeEach] [sig-node] Variable Expansion set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 18:50:56.448 Nov 8 18:50:56.448: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename var-expansion 11/08/22 18:50:56.449 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 18:50:56.463 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 18:50:56.467 [BeforeEach] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:31 [It] should succeed in writing subpaths in container [Slow] [Conformance] test/e2e/common/node/expansion.go:297 STEP: creating the pod 11/08/22 18:50:56.472 STEP: waiting for pod running 11/08/22 18:50:56.483 Nov 8 18:50:56.483: INFO: Waiting up to 2m0s for pod "var-expansion-fba77f64-8c53-4f38-aba3-9335feab44ae" in namespace "var-expansion-1369" to be "running" Nov 8 18:50:56.487: INFO: Pod "var-expansion-fba77f64-8c53-4f38-aba3-9335feab44ae": Phase="Pending", Reason="", readiness=false. Elapsed: 3.683768ms Nov 8 18:50:58.491: INFO: Pod "var-expansion-fba77f64-8c53-4f38-aba3-9335feab44ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00835671s Nov 8 18:51:00.492: INFO: Pod "var-expansion-fba77f64-8c53-4f38-aba3-9335feab44ae": Phase="Running", Reason="", readiness=true. Elapsed: 4.009096582s Nov 8 18:51:00.492: INFO: Pod "var-expansion-fba77f64-8c53-4f38-aba3-9335feab44ae" satisfied condition "running" STEP: creating a file in subpath 11/08/22 18:51:00.492 Nov 8 18:51:00.496: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-1369 PodName:var-expansion-fba77f64-8c53-4f38-aba3-9335feab44ae ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 8 18:51:00.496: INFO: >>> kubeConfig: /workspace/.kube/config Nov 8 18:51:00.497: INFO: ExecWithOptions: Clientset creation Nov 8 18:51:00.497: INFO: ExecWithOptions: execute(POST https://localhost:6443/api/v1/namespaces/var-expansion-1369/pods/var-expansion-fba77f64-8c53-4f38-aba3-9335feab44ae/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) Nov 8 18:51:00.520: FAIL: expected to be able to write to subpath Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.glob..func7.8() test/e2e/common/node/expansion.go:348 +0x4b2 [AfterEach] [sig-node] Variable Expansion test/e2e/framework/node/init/init.go:32 Nov 8 18:51:00.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-node] Variable Expansion test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-node] Variable Expansion dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 18:51:00.523 STEP: Collecting events from namespace "var-expansion-1369". 11/08/22 18:51:00.523 STEP: Found 4 events. 11/08/22 18:51:00.527 Nov 8 18:51:00.527: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for var-expansion-fba77f64-8c53-4f38-aba3-9335feab44ae: { } Scheduled: Successfully assigned var-expansion-1369/var-expansion-fba77f64-8c53-4f38-aba3-9335feab44ae to 172.17.0.1 Nov 8 18:51:00.527: INFO: At 2022-11-08 18:50:59 +0000 UTC - event for var-expansion-fba77f64-8c53-4f38-aba3-9335feab44ae: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine Nov 8 18:51:00.527: INFO: At 2022-11-08 18:50:59 +0000 UTC - event for var-expansion-fba77f64-8c53-4f38-aba3-9335feab44ae: {kubelet 172.17.0.1} Created: Created container dapi-container Nov 8 18:51:00.527: INFO: At 2022-11-08 18:50:59 +0000 UTC - event for var-expansion-fba77f64-8c53-4f38-aba3-9335feab44ae: {kubelet 172.17.0.1} Started: Started container dapi-container Nov 8 18:51:00.531: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 18:51:00.531: INFO: var-expansion-fba77f64-8c53-4f38-aba3-9335feab44ae 172.17.0.1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:50:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:50:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:50:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:50:56 +0000 UTC }] Nov 8 18:51:00.531: INFO: Nov 8 18:51:00.540: INFO: Logging node info for node 172.17.0.1 Nov 8 18:51:00.543: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 5244 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:49:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:49:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:49:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:49:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:49:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 18:51:00.543: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 18:51:00.546: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 18:51:00.552: INFO: suspend-false-to-true-ft27t started at 2022-11-08 18:50:52 +0000 UTC (0+1 container statuses recorded) Nov 8 18:51:00.552: INFO: Container c ready: false, restart count 0 Nov 8 18:51:00.552: INFO: suspend-false-to-true-wfwcn started at 2022-11-08 18:50:52 +0000 UTC (0+1 container statuses recorded) Nov 8 18:51:00.552: INFO: Container c ready: false, restart count 0 Nov 8 18:51:00.552: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 18:51:00.552: INFO: Container coredns ready: false, restart count 13 Nov 8 18:51:00.552: INFO: var-expansion-fba77f64-8c53-4f38-aba3-9335feab44ae started at 2022-11-08 18:50:56 +0000 UTC (0+1 container statuses recorded) Nov 8 18:51:00.552: INFO: Container dapi-container ready: true, restart count 0 Nov 8 18:51:00.584: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-node] Variable Expansion tear down framework | framework.go:193 STEP: Destroying namespace "var-expansion-1369" for this suite. 11/08/22 18:51:00.585
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sConfigMap\soptional\supdates\sshould\sbe\sreflected\sin\svolume\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/pod/pod_client.go:106 k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).CreateSync(0xc0044b63d8, 0x10?) test/e2e/framework/pod/pod_client.go:106 +0x94 k8s.io/kubernetes/test/e2e/common/storage.glob..func1.12() test/e2e/common/storage/configmap_volume.go:378 +0x155ffrom junit_01.xml
[BeforeEach] [sig-storage] ConfigMap set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 19:08:12.66 Nov 8 19:08:12.660: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename configmap 11/08/22 19:08:12.662 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 19:08:12.685 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 19:08:12.69 [BeforeEach] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:31 [It] optional updates should be reflected in volume [NodeConformance] [Conformance] test/e2e/common/storage/configmap_volume.go:240 STEP: Creating configMap with name cm-test-opt-del-eaaa5f42-816a-4b4a-9a62-946607fad3e7 11/08/22 19:08:12.702 STEP: Creating configMap with name cm-test-opt-upd-68d6f97c-7363-46ce-9944-7f1a15808f20 11/08/22 19:08:12.709 STEP: Creating the pod 11/08/22 19:08:12.72 Nov 8 19:08:12.738: INFO: Waiting up to 5m0s for pod "pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9" in namespace "configmap-2267" to be "running and ready" Nov 8 19:08:12.743: INFO: Pod "pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.789067ms Nov 8 19:08:12.743: INFO: The phase of Pod pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9 is Pending, waiting for it to be Running (with Ready = true) Nov 8 19:08:14.747: INFO: Pod "pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009461602s Nov 8 19:08:14.748: INFO: The phase of Pod pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9 is Pending, waiting for it to be Running (with Ready = true) Nov 8 19:08:16.748: INFO: Pod "pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010045775s Nov 8 19:08:16.748: INFO: The phase of Pod pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9 is Pending, waiting for it to be Running (with Ready = true) Nov 8 19:08:18.748: INFO: Pod "pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9": Phase="Failed", Reason="", readiness=false. Elapsed: 6.009617477s Nov 8 19:08:18.748: INFO: The phase of Pod pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9 is Failed which is unexpected, pod status: v1.PodStatus{Phase:"Failed", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 19, 8, 12, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 19, 8, 12, 0, time.Local), Reason:"PodFailed", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 19, 8, 12, 0, time.Local), Reason:"PodFailed", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.November, 8, 19, 8, 12, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.1", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:time.Date(2022, time.November, 8, 19, 8, 12, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"createcm-volume-test", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00418f340)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://fc397c9f72ddb84c0eac5f7c16d69023dfbbddaa45d5bc905d28c18eb2e9cd1d", Started:(*bool)(0xc0041b065f)}, v1.ContainerStatus{Name:"delcm-volume-test", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00418f3b0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://2534506b820cf9d669582fa5afa55d3133a1623a0de1646ec3db7c320a33a11b", Started:(*bool)(0xc0041b0665)}, v1.ContainerStatus{Name:"updcm-volume-test", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00418f420)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.40", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146", ContainerID:"containerd://511694d1ded5ff8b9c6c0a1b690c82487ee2045e98423e14dae0953557793db5", Started:(*bool)(0xc0041b066b)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Nov 8 19:08:18.748: INFO: Error evaluating pod condition running and ready: final error: pod failed permanently Nov 8 19:08:18.748: INFO: Unexpected error: <*fmt.wrapError | 0xc004c873c0>: { msg: "error while waiting for pod configmap-2267/pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9 to be running and ready: final error: pod failed permanently", err: <*pod.FinalErr | 0xc0007417b0>{ Err: <*errors.errorString | 0xc0007417a0>{ s: "pod failed permanently", }, }, } Nov 8 19:08:18.748: FAIL: error while waiting for pod configmap-2267/pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9 to be running and ready: final error: pod failed permanently Full Stack Trace k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).CreateSync(0xc0044b63d8, 0x10?) test/e2e/framework/pod/pod_client.go:106 +0x94 k8s.io/kubernetes/test/e2e/common/storage.glob..func1.12() test/e2e/common/storage/configmap_volume.go:378 +0x155f [AfterEach] [sig-storage] ConfigMap test/e2e/framework/node/init/init.go:32 Nov 8 19:08:18.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] ConfigMap test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] ConfigMap dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 19:08:18.754 STEP: Collecting events from namespace "configmap-2267". 11/08/22 19:08:18.754 STEP: Found 10 events. 11/08/22 19:08:18.758 Nov 8 19:08:18.758: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9: { } Scheduled: Successfully assigned configmap-2267/pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9 to 172.17.0.1 Nov 8 19:08:18.758: INFO: At 2022-11-08 19:08:15 +0000 UTC - event for pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 19:08:18.758: INFO: At 2022-11-08 19:08:15 +0000 UTC - event for pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9: {kubelet 172.17.0.1} Created: Created container delcm-volume-test Nov 8 19:08:18.758: INFO: At 2022-11-08 19:08:15 +0000 UTC - event for pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9: {kubelet 172.17.0.1} Started: Started container delcm-volume-test Nov 8 19:08:18.758: INFO: At 2022-11-08 19:08:15 +0000 UTC - event for pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 19:08:18.758: INFO: At 2022-11-08 19:08:15 +0000 UTC - event for pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9: {kubelet 172.17.0.1} Created: Created container updcm-volume-test Nov 8 19:08:18.758: INFO: At 2022-11-08 19:08:15 +0000 UTC - event for pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9: {kubelet 172.17.0.1} Started: Started container updcm-volume-test Nov 8 19:08:18.758: INFO: At 2022-11-08 19:08:15 +0000 UTC - event for pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 19:08:18.758: INFO: At 2022-11-08 19:08:15 +0000 UTC - event for pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9: {kubelet 172.17.0.1} Created: Created container createcm-volume-test Nov 8 19:08:18.758: INFO: At 2022-11-08 19:08:15 +0000 UTC - event for pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9: {kubelet 172.17.0.1} Started: Started container createcm-volume-test Nov 8 19:08:18.762: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 19:08:18.762: INFO: pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9 172.17.0.1 Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:08:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:08:12 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:08:12 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 19:08:12 +0000 UTC }] Nov 8 19:08:18.762: INFO: Nov 8 19:08:18.786: INFO: Logging node info for node 172.17.0.1 Nov 8 19:08:18.790: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 8011 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 19:05:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 19:08:18.790: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 19:08:18.795: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 19:08:18.802: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 19:08:18.803: INFO: Container coredns ready: false, restart count 16 Nov 8 19:08:18.803: INFO: pod-configmaps-c9ed158a-a9e8-49ba-8c73-9c51fcb54de9 started at 2022-11-08 19:08:12 +0000 UTC (0+3 container statuses recorded) Nov 8 19:08:18.803: INFO: Container createcm-volume-test ready: false, restart count 0 Nov 8 19:08:18.803: INFO: Container delcm-volume-test ready: false, restart count 0 Nov 8 19:08:18.803: INFO: Container updcm-volume-test ready: false, restart count 0 Nov 8 19:08:18.841: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-storage] ConfigMap tear down framework | framework.go:193 STEP: Destroying namespace "configmap-2267" for this suite. 11/08/22 19:08:18.842
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sEmptyDir\svolumes\sshould\ssupport\s\(root\,0644\,default\)\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/pod/output/output.go:237 k8s.io/kubernetes/test/e2e/framework/pod/output.TestContainerOutputMatcher(0x74ef307?, {0xc002937590?, 0xc0036ebec8?}, 0xc0012dd800, 0x0, {0xc001947ef8, 0x2, 0x2}, 0x5e?) test/e2e/framework/pod/output/output.go:237 +0x176 k8s.io/kubernetes/test/e2e/framework/pod/output.TestContainerOutput(...) test/e2e/framework/pod/output/output.go:214 k8s.io/kubernetes/test/e2e/common/storage.doTest0644(0x0?, 0xc0003f1a40?, {0x0, 0x0}) test/e2e/common/storage/empty_dir.go:527 +0x51f k8s.io/kubernetes/test/e2e/common/storage.glob..func4.10() test/e2e/common/storage/empty_dir.go:168 +0x25from junit_01.xml
[BeforeEach] [sig-storage] EmptyDir volumes set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 19:08:43.364 Nov 8 19:08:43.364: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename emptydir 11/08/22 19:08:43.365 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 19:08:43.382 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 19:08:43.387 [BeforeEach] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:31 [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] test/e2e/common/storage/empty_dir.go:167 STEP: Creating a pod to test emptydir 0644 on node default medium 11/08/22 19:08:43.391 Nov 8 19:08:43.404: INFO: Waiting up to 5m0s for pod "pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a" in namespace "emptydir-806" to be "Succeeded or Failed" Nov 8 19:08:43.408: INFO: Pod "pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119775ms Nov 8 19:08:45.414: INFO: Pod "pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009641082s Nov 8 19:08:47.414: INFO: Pod "pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010370551s Nov 8 19:08:49.412: INFO: Pod "pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a": Phase="Failed", Reason="", readiness=false. Elapsed: 6.008578935s Nov 8 19:08:49.424: INFO: Output of node "172.17.0.1" pod "pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a" container "test-container": STEP: delete the pod 11/08/22 19:08:49.424 Nov 8 19:08:49.443: INFO: Waiting for pod pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a to disappear Nov 8 19:08:49.447: INFO: Pod pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a no longer exists Nov 8 19:08:49.447: INFO: Unexpected error: <*errors.errorString | 0xc0014b3c50>: { s: "expected pod \"pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a\" success: error while waiting for pod emptydir-806/pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a to be Succeeded or Failed: pod \"pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 19:08:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 19:08:43 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 19:08:43 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 19:08:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.1 PodIP: PodIPs:[] StartTime:2022-11-08 19:08:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:StartError,Message:failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: can't get final child's PID from pipe: EOF: unknown,StartedAt:1970-01-01 00:00:00 +0000 UTC,FinishedAt:2022-11-08 19:08:45 +0000 UTC,ContainerID:containerd://c2bd9692b2af507dcabc88a8331f74d42dd52b1fe2d362dbb1596662a4ceeda0,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/agnhost:2.40 ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 ContainerID:containerd://c2bd9692b2af507dcabc88a8331f74d42dd52b1fe2d362dbb1596662a4ceeda0 Started:0xc00425759a}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", } Nov 8 19:08:49.447: FAIL: expected pod "pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a" success: error while waiting for pod emptydir-806/pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a to be Succeeded or Failed: pod "pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 19:08:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 19:08:43 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 19:08:43 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 19:08:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.1 PodIP: PodIPs:[] StartTime:2022-11-08 19:08:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:StartError,Message:failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: can't get final child's PID from pipe: EOF: unknown,StartedAt:1970-01-01 00:00:00 +0000 UTC,FinishedAt:2022-11-08 19:08:45 +0000 UTC,ContainerID:containerd://c2bd9692b2af507dcabc88a8331f74d42dd52b1fe2d362dbb1596662a4ceeda0,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/agnhost:2.40 ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 ContainerID:containerd://c2bd9692b2af507dcabc88a8331f74d42dd52b1fe2d362dbb1596662a4ceeda0 Started:0xc00425759a}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Full Stack Trace k8s.io/kubernetes/test/e2e/framework/pod/output.TestContainerOutputMatcher(0x74ef307?, {0xc002937590?, 0xc0036ebec8?}, 0xc0012dd800, 0x0, {0xc001947ef8, 0x2, 0x2}, 0x5e?) test/e2e/framework/pod/output/output.go:237 +0x176 k8s.io/kubernetes/test/e2e/framework/pod/output.TestContainerOutput(...) test/e2e/framework/pod/output/output.go:214 k8s.io/kubernetes/test/e2e/common/storage.doTest0644(0x0?, 0xc0003f1a40?, {0x0, 0x0}) test/e2e/common/storage/empty_dir.go:527 +0x51f k8s.io/kubernetes/test/e2e/common/storage.glob..func4.10() test/e2e/common/storage/empty_dir.go:168 +0x25 [AfterEach] [sig-storage] EmptyDir volumes test/e2e/framework/node/init/init.go:32 Nov 8 19:08:49.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] EmptyDir volumes test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] EmptyDir volumes dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 19:08:49.451 STEP: Collecting events from namespace "emptydir-806". 11/08/22 19:08:49.452 STEP: Found 4 events. 11/08/22 19:08:49.456 Nov 8 19:08:49.456: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a: { } Scheduled: Successfully assigned emptydir-806/pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a to 172.17.0.1 Nov 8 19:08:49.456: INFO: At 2022-11-08 19:08:45 +0000 UTC - event for pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 19:08:49.456: INFO: At 2022-11-08 19:08:45 +0000 UTC - event for pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a: {kubelet 172.17.0.1} Created: Created container test-container Nov 8 19:08:49.456: INFO: At 2022-11-08 19:08:45 +0000 UTC - event for pod-5863f5ee-d50a-4167-9ceb-0fdf67d0329a: {kubelet 172.17.0.1} Failed: Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: can't get final child's PID from pipe: EOF: unknown Nov 8 19:08:49.461: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 19:08:49.461: INFO: Nov 8 19:08:49.464: INFO: Logging node info for node 172.17.0.1 Nov 8 19:08:49.469: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 8011 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 19:05:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 19:05:01 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 19:08:49.469: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 19:08:49.475: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 19:08:49.483: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 19:08:49.483: INFO: Container coredns ready: false, restart count 16 Nov 8 19:08:49.516: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-storage] EmptyDir volumes tear down framework | framework.go:193 STEP: Destroying namespace "emptydir-806" for this suite. 11/08/22 19:08:49.516
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sProjected\sconfigMap\supdates\sshould\sbe\sreflected\sin\svolume\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/common/storage/projected_configmap.go:166 k8s.io/kubernetes/test/e2e/common/storage.glob..func7.10() test/e2e/common/storage/projected_configmap.go:166 +0x9eefrom junit_01.xml
[BeforeEach] [sig-storage] Projected configMap set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 18:46:31.101 Nov 8 18:46:31.101: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename projected 11/08/22 18:46:31.102 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 18:46:31.12 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 18:46:31.125 [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:31 [It] updates should be reflected in volume [NodeConformance] [Conformance] test/e2e/common/storage/projected_configmap.go:124 STEP: Creating projection with configMap that has name projected-configmap-test-upd-20bc75dd-dd77-43b8-b93d-63ff542063fe 11/08/22 18:46:31.134 STEP: Creating the pod 11/08/22 18:46:31.141 Nov 8 18:46:31.155: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f" in namespace "projected-9335" to be "running and ready" Nov 8 18:46:31.159: INFO: Pod "pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.002936ms Nov 8 18:46:31.159: INFO: The phase of Pod pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:46:33.163: INFO: Pod "pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007668102s Nov 8 18:46:33.163: INFO: The phase of Pod pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:46:35.163: INFO: Pod "pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007401829s Nov 8 18:46:35.163: INFO: The phase of Pod pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:46:37.164: INFO: Pod "pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008946588s Nov 8 18:46:37.164: INFO: The phase of Pod pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:46:39.163: INFO: Pod "pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.007751473s Nov 8 18:46:39.163: INFO: The phase of Pod pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:46:41.164: INFO: Pod "pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.008856574s Nov 8 18:46:41.164: INFO: The phase of Pod pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:46:43.167: INFO: Pod "pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.011790087s Nov 8 18:46:43.167: INFO: The phase of Pod pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:46:45.163: INFO: Pod "pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.008106891s Nov 8 18:46:45.163: INFO: The phase of Pod pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:46:47.164: INFO: Pod "pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f": Phase="Running", Reason="", readiness=true. Elapsed: 16.009107817s Nov 8 18:46:47.164: INFO: The phase of Pod pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f is Running (Ready = true) Nov 8 18:46:47.164: INFO: Pod "pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f" satisfied condition "running and ready" STEP: Updating configmap projected-configmap-test-upd-20bc75dd-dd77-43b8-b93d-63ff542063fe 11/08/22 18:46:47.173 STEP: waiting to observe update in volume 11/08/22 18:46:47.18 Nov 8 18:50:47.180: FAIL: Timed out after 240.000s. Expected <string>: content of file "/etc/projected-configmap-volume/data-1": value-1 to contain substring <string>: value-2 Full Stack Trace k8s.io/kubernetes/test/e2e/common/storage.glob..func7.10() test/e2e/common/storage/projected_configmap.go:166 +0x9ee [AfterEach] [sig-storage] Projected configMap test/e2e/framework/node/init/init.go:32 Nov 8 18:50:47.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Projected configMap test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Projected configMap dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 18:50:47.186 STEP: Collecting events from namespace "projected-9335". 11/08/22 18:50:47.186 STEP: Found 5 events. 11/08/22 18:50:47.191 Nov 8 18:50:47.191: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f: { } Scheduled: Successfully assigned projected-9335/pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f to 172.17.0.1 Nov 8 18:50:47.191: INFO: At 2022-11-08 18:46:33 +0000 UTC - event for pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f: {kubelet 172.17.0.1} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container task "af77c6df6fe41a53007634c833cbc68aa54e422c962d5d8d5ed2fafccfb6261a": cannot start a stopped process: unknown Nov 8 18:50:47.191: INFO: At 2022-11-08 18:46:46 +0000 UTC - event for pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 18:50:47.191: INFO: At 2022-11-08 18:46:46 +0000 UTC - event for pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f: {kubelet 172.17.0.1} Created: Created container agnhost-container Nov 8 18:50:47.191: INFO: At 2022-11-08 18:46:46 +0000 UTC - event for pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f: {kubelet 172.17.0.1} Started: Started container agnhost-container Nov 8 18:50:47.196: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 18:50:47.196: INFO: pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f 172.17.0.1 Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:46:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:46:48 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:46:48 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:46:31 +0000 UTC }] Nov 8 18:50:47.196: INFO: Nov 8 18:50:47.208: INFO: Logging node info for node 172.17.0.1 Nov 8 18:50:47.211: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 5244 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:49:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:49:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:49:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:49:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:49:43 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 18:50:47.212: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 18:50:47.217: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 18:50:47.223: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 18:50:47.223: INFO: Container coredns ready: false, restart count 13 Nov 8 18:50:47.223: INFO: pod-projected-configmaps-24ba2b4c-6dc6-4829-aa7c-e394486e248f started at 2022-11-08 18:46:31 +0000 UTC (0+1 container statuses recorded) Nov 8 18:50:47.223: INFO: Container agnhost-container ready: false, restart count 0 Nov 8 18:50:47.257: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-storage] Projected configMap tear down framework | framework.go:193 STEP: Destroying namespace "projected-9335" for this suite. 11/08/22 18:50:47.257
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sProjected\sdownwardAPI\sshould\sprovide\snode\sallocatable\s\(cpu\)\sas\sdefault\scpu\slimit\sif\sthe\slimit\sis\snot\sset\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/pod/output/output.go:237 k8s.io/kubernetes/test/e2e/framework/pod/output.TestContainerOutputMatcher(0xc0040f2b00?, {0x751eef8?, 0x74fed59?}, 0xc00409d800, 0x0, {0xc00372bf58, 0x1, 0x1}, 0x7e82740?) test/e2e/framework/pod/output/output.go:237 +0x176 k8s.io/kubernetes/test/e2e/framework/pod/output.TestContainerOutputRegexp(...) test/e2e/framework/pod/output/output.go:221 k8s.io/kubernetes/test/e2e/common/storage.glob..func8.13() test/e2e/common/storage/projected_downwardapi.go:253 +0x9bfrom junit_01.xml
[BeforeEach] [sig-storage] Projected downwardAPI set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 18:18:37.767 Nov 8 18:18:37.767: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename projected 11/08/22 18:18:37.769 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 18:18:37.794 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 18:18:37.805 [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Projected downwardAPI test/e2e/common/storage/projected_downwardapi.go:44 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] test/e2e/common/storage/projected_downwardapi.go:249 STEP: Creating a pod to test downward API volume plugin 11/08/22 18:18:37.814 Nov 8 18:18:37.849: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd" in namespace "projected-7666" to be "Succeeded or Failed" Nov 8 18:18:37.858: INFO: Pod "downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.354566ms Nov 8 18:18:39.869: INFO: Pod "downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019829945s Nov 8 18:18:41.868: INFO: Pod "downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0196332s Nov 8 18:18:43.862: INFO: Pod "downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01364466s Nov 8 18:18:45.863: INFO: Pod "downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014582506s Nov 8 18:18:47.864: INFO: Pod "downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.01543721s Nov 8 18:18:49.863: INFO: Pod "downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.013680482s Nov 8 18:18:51.863: INFO: Pod "downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.014111265s Nov 8 18:18:53.864: INFO: Pod "downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd": Phase="Failed", Reason="", readiness=false. Elapsed: 16.015219984s Nov 8 18:18:53.876: INFO: Output of node "172.17.0.1" pod "downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd" container "client-container": STEP: delete the pod 11/08/22 18:18:53.876 Nov 8 18:18:53.891: INFO: Waiting for pod downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd to disappear Nov 8 18:18:53.896: INFO: Pod downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd no longer exists Nov 8 18:18:53.896: INFO: Unexpected error: <*errors.errorString | 0xc00426cfc0>: { s: "expected pod \"downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd\" success: error while waiting for pod projected-7666/downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd to be Succeeded or Failed: pod \"downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:18:37 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:18:37 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:18:37 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:18:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.1 PodIP:10.88.1.207 PodIPs:[{IP:10.88.1.207} {IP:2001:4860:4860::1cf}] StartTime:2022-11-08 18:18:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:StartError,Message:failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: can't get final child's PID from pipe: EOF: unknown,StartedAt:1970-01-01 00:00:00 +0000 UTC,FinishedAt:2022-11-08 18:18:41 +0000 UTC,ContainerID:containerd://89f7a25cab3c34e98bc8ccb103ded3ac52b2e2bbfac2640a5c8546e037c7bd46,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/agnhost:2.40 ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 ContainerID:containerd://89f7a25cab3c34e98bc8ccb103ded3ac52b2e2bbfac2640a5c8546e037c7bd46 Started:0xc00428257a}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", } Nov 8 18:18:53.896: FAIL: expected pod "downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd" success: error while waiting for pod projected-7666/downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd to be Succeeded or Failed: pod "downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:18:37 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:18:37 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:18:37 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:18:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.1 PodIP:10.88.1.207 PodIPs:[{IP:10.88.1.207} {IP:2001:4860:4860::1cf}] StartTime:2022-11-08 18:18:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:StartError,Message:failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: can't get final child's PID from pipe: EOF: unknown,StartedAt:1970-01-01 00:00:00 +0000 UTC,FinishedAt:2022-11-08 18:18:41 +0000 UTC,ContainerID:containerd://89f7a25cab3c34e98bc8ccb103ded3ac52b2e2bbfac2640a5c8546e037c7bd46,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/agnhost:2.40 ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 ContainerID:containerd://89f7a25cab3c34e98bc8ccb103ded3ac52b2e2bbfac2640a5c8546e037c7bd46 Started:0xc00428257a}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Full Stack Trace k8s.io/kubernetes/test/e2e/framework/pod/output.TestContainerOutputMatcher(0xc0040f2b00?, {0x751eef8?, 0x74fed59?}, 0xc00409d800, 0x0, {0xc00372bf58, 0x1, 0x1}, 0x7e82740?) test/e2e/framework/pod/output/output.go:237 +0x176 k8s.io/kubernetes/test/e2e/framework/pod/output.TestContainerOutputRegexp(...) test/e2e/framework/pod/output/output.go:221 k8s.io/kubernetes/test/e2e/common/storage.glob..func8.13() test/e2e/common/storage/projected_downwardapi.go:253 +0x9b [AfterEach] [sig-storage] Projected downwardAPI test/e2e/framework/node/init/init.go:32 Nov 8 18:18:53.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Projected downwardAPI test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Projected downwardAPI dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 18:18:53.9 STEP: Collecting events from namespace "projected-7666". 11/08/22 18:18:53.9 STEP: Found 1 events. 11/08/22 18:18:53.904 Nov 8 18:18:53.905: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd: { } Scheduled: Successfully assigned projected-7666/downwardapi-volume-bdce53f5-83e0-4447-a6e5-8b52d5ab7ecd to 172.17.0.1 Nov 8 18:18:53.907: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 18:18:53.907: INFO: Nov 8 18:18:53.911: INFO: Logging node info for node 172.17.0.1 Nov 8 18:18:53.915: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 1859 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:18:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:18:35 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:18:35 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:18:35 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:18:35 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 18:18:53.916: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 18:18:53.920: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 18:18:53.926: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 18:18:53.926: INFO: Container coredns ready: false, restart count 7 Nov 8 18:18:53.960: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-storage] Projected downwardAPI tear down framework | framework.go:193 STEP: Destroying namespace "projected-7666" for this suite. 11/08/22 18:18:53.961
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sProjected\ssecret\soptional\supdates\sshould\sbe\sreflected\sin\svolume\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/common/storage/projected_secret.go:406 k8s.io/kubernetes/test/e2e/common/storage.glob..func9.8() test/e2e/common/storage/projected_secret.go:406 +0x23b1from junit_01.xml
[BeforeEach] [sig-storage] Projected secret set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 18:39:57.306 Nov 8 18:39:57.306: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename projected 11/08/22 18:39:57.308 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 18:39:57.33 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 18:39:57.336 [BeforeEach] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:31 [It] optional updates should be reflected in volume [NodeConformance] [Conformance] test/e2e/common/storage/projected_secret.go:215 STEP: Creating secret with name s-test-opt-del-70b2e704-c24d-4c25-885f-e7c4fdde457f 11/08/22 18:39:57.349 STEP: Creating secret with name s-test-opt-upd-6147f804-771c-46a1-8a46-48bf918b1021 11/08/22 18:39:57.36 STEP: Creating the pod 11/08/22 18:39:57.371 Nov 8 18:39:57.386: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998" in namespace "projected-9082" to be "running and ready" Nov 8 18:39:57.394: INFO: Pod "pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998": Phase="Pending", Reason="", readiness=false. Elapsed: 7.940518ms Nov 8 18:39:57.394: INFO: The phase of Pod pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998 is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:39:59.399: INFO: Pod "pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012795147s Nov 8 18:39:59.399: INFO: The phase of Pod pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998 is Pending, waiting for it to be Running (with Ready = true) Nov 8 18:40:01.399: INFO: Pod "pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998": Phase="Running", Reason="", readiness=true. Elapsed: 4.013012858s Nov 8 18:40:01.399: INFO: The phase of Pod pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998 is Running (Ready = true) Nov 8 18:40:01.399: INFO: Pod "pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998" satisfied condition "running and ready" STEP: Deleting secret s-test-opt-del-70b2e704-c24d-4c25-885f-e7c4fdde457f 11/08/22 18:40:01.427 STEP: Updating secret s-test-opt-upd-6147f804-771c-46a1-8a46-48bf918b1021 11/08/22 18:40:01.435 STEP: Creating secret with name s-test-opt-create-5ced95aa-49f1-420d-ae13-1d9c2d4805c9 11/08/22 18:40:01.441 STEP: waiting to observe update in volume 11/08/22 18:40:01.45 Nov 8 18:44:01.452: FAIL: Timed out after 240.001s. Expected <string>: Error reading file /etc/projected-secret-volumes/create/data-1: open /etc/projected-secret-volumes/create/data-1: no such file or directory, retrying to contain substring <string>: value-1 Full Stack Trace k8s.io/kubernetes/test/e2e/common/storage.glob..func9.8() test/e2e/common/storage/projected_secret.go:406 +0x23b1 [AfterEach] [sig-storage] Projected secret test/e2e/framework/node/init/init.go:32 Nov 8 18:44:01.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Projected secret test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Projected secret dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 18:44:01.457 STEP: Collecting events from namespace "projected-9082". 11/08/22 18:44:01.458 STEP: Found 10 events. 11/08/22 18:44:01.463 Nov 8 18:44:01.463: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998: { } Scheduled: Successfully assigned projected-9082/pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998 to 172.17.0.1 Nov 8 18:44:01.463: INFO: At 2022-11-08 18:39:59 +0000 UTC - event for pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 18:44:01.463: INFO: At 2022-11-08 18:39:59 +0000 UTC - event for pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998: {kubelet 172.17.0.1} Created: Created container dels-volume-test Nov 8 18:44:01.463: INFO: At 2022-11-08 18:39:59 +0000 UTC - event for pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998: {kubelet 172.17.0.1} Started: Started container dels-volume-test Nov 8 18:44:01.463: INFO: At 2022-11-08 18:39:59 +0000 UTC - event for pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 18:44:01.463: INFO: At 2022-11-08 18:40:00 +0000 UTC - event for pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998: {kubelet 172.17.0.1} Created: Created container upds-volume-test Nov 8 18:44:01.463: INFO: At 2022-11-08 18:40:00 +0000 UTC - event for pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998: {kubelet 172.17.0.1} Started: Started container upds-volume-test Nov 8 18:44:01.463: INFO: At 2022-11-08 18:40:00 +0000 UTC - event for pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998: {kubelet 172.17.0.1} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Nov 8 18:44:01.463: INFO: At 2022-11-08 18:40:00 +0000 UTC - event for pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998: {kubelet 172.17.0.1} Created: Created container creates-volume-test Nov 8 18:44:01.463: INFO: At 2022-11-08 18:40:00 +0000 UTC - event for pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998: {kubelet 172.17.0.1} Started: Started container creates-volume-test Nov 8 18:44:01.467: INFO: POD NODE PHASE GRACE CONDITIONS Nov 8 18:44:01.467: INFO: pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998 172.17.0.1 Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:39:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:40:02 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:40:02 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-08 18:39:57 +0000 UTC }] Nov 8 18:44:01.467: INFO: Nov 8 18:44:01.499: INFO: Logging node info for node 172.17.0.1 Nov 8 18:44:01.504: INFO: Node Info: &Node{ObjectMeta:{172.17.0.1 1c9ca6f0-ace7-4a33-a1cd-137d512be00a 4467 0 2022-11-08 18:07:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:172.17.0.1 kubernetes.io/os:linux] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {kubelet Update v1 2022-11-08 18:07:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2022-11-08 18:40:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{259962224640 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67441348608 0} {<nil>} 65860692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{233966001789 0} {<nil>} 233966001789 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{67336491008 0} {<nil>} 65758292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-08 18:40:22 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-08 18:40:22 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-08 18:40:22 +0000 UTC,LastTransitionTime:2022-11-08 18:07:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-08 18:40:22 +0000 UTC,LastTransitionTime:2022-11-08 18:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.1,},NodeAddress{Type:Hostname,Address:172.17.0.1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:7d8834b1-ec1e-71b0-7148-50316089d154,BootID:99214993-e7b1-4bff-9db2-b9548be8d199,KernelVersion:5.4.0-1078-gke,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.26.0-alpha.3.387+504f252722dcc8,KubeProxyVersion:v1.26.0-alpha.3.387+504f252722dcc8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c registry.k8s.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 8 18:44:01.504: INFO: Logging kubelet events for node 172.17.0.1 Nov 8 18:44:01.507: INFO: Logging pods the kubelet thinks is on node 172.17.0.1 Nov 8 18:44:01.515: INFO: coredns-755454cbdc-s26wr started at 2022-11-08 18:07:54 +0000 UTC (0+1 container statuses recorded) Nov 8 18:44:01.515: INFO: Container coredns ready: false, restart count 12 Nov 8 18:44:01.515: INFO: pod-projected-secrets-14c6802a-5804-4057-b575-d02404fe2998 started at 2022-11-08 18:39:57 +0000 UTC (0+3 container statuses recorded) Nov 8 18:44:01.515: INFO: Container creates-volume-test ready: false, restart count 0 Nov 8 18:44:01.515: INFO: Container dels-volume-test ready: false, restart count 0 Nov 8 18:44:01.515: INFO: Container upds-volume-test ready: false, restart count 0 Nov 8 18:44:01.551: INFO: Latency metrics for node 172.17.0.1 [DeferCleanup (Each)] [sig-storage] Projected secret tear down framework | framework.go:193 STEP: Destroying namespace "projected-9082" for this suite. 11/08/22 18:44:01.553
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sSubpath\sAtomic\swriter\svolumes\sshould\ssupport\ssubpaths\swith\sdownward\spod\s\[Conformance\]$'
test/e2e/framework/pod/output/output.go:237 k8s.io/kubernetes/test/e2e/framework/pod/output.TestContainerOutputMatcher(0xc00284f000?, {0x74fbbf9?, 0x0?}, 0xc00284f000, 0x0, {0xc003433df8, 0x1, 0x1}, 0x0?) test/e2e/framework/pod/output/output.go:237 +0x176 k8s.io/kubernetes/test/e2e/framework/pod/output.TestContainerOutput(...) test/e2e/framework/pod/output/output.go:214 k8s.io/kubernetes/test/e2e/storage/testsuites.TestBasicSubpathFile(0xc00166e000?, {0xc002936f60?, 0x21?}, 0xc00284f000?, {0x74c59b9?, 0xc003433e80?}) test/e2e/storage/testsuites/subpath.go:489 +0x12a k8s.io/kubernetes/test/e2e/storage/testsuites.TestBasicSubpath(...) test/e2e/storage/testsuites/subpath.go:480 k8s.io/kubernetes/test/e2e/storage.glob..func31.1.5() test/e2e/storage/subpath.go:98 +0x17d
[BeforeEach] [sig-storage] Subpath set up framework | framework.go:178 STEP: Creating a kubernetes client 11/08/22 18:13:16.068 Nov 8 18:13:16.069: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename subpath 11/08/22 18:13:16.07 STEP: Waiting for a default service account to be provisioned in namespace 11/08/22 18:13:16.093 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/08/22 18:13:16.101 [BeforeEach] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:31 [BeforeEach] Atomic writer volumes test/e2e/storage/subpath.go:40 STEP: Setting up data 11/08/22 18:13:16.105 [It] should support subpaths with downward pod [Conformance] test/e2e/storage/subpath.go:92 STEP: Creating pod pod-subpath-test-downwardapi-w6dq 11/08/22 18:13:16.125 STEP: Creating a pod to test atomic-volume-subpath 11/08/22 18:13:16.126 Nov 8 18:13:16.135: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-w6dq" in namespace "subpath-5748" to be "Succeeded or Failed" Nov 8 18:13:16.139: INFO: Pod "pod-subpath-test-downwardapi-w6dq": Phase="Pending", Reason="", readiness=false. Elapsed: 3.685967ms Nov 8 18:13:18.143: INFO: Pod "pod-subpath-test-downwardapi-w6dq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007764169s Nov 8 18:13:20.144: INFO: Pod "pod-subpath-test-downwardapi-w6dq": Phase="Running", Reason="", readiness=false. Elapsed: 4.008328594s Nov 8 18:13:22.145: INFO: Pod "pod-subpath-test-downwardapi-w6dq": Phase="Failed", Reason="", readiness=false. Elapsed: 6.009398507s Nov 8 18:13:22.159: INFO: Output of node "172.17.0.1" pod "pod-subpath-test-downwardapi-w6dq" container "test-container-subpath-downwardapi-w6dq": content of file "/test-volume": pod-subpath-test-downwardapi-w6dq Unexpected content. Expected: mount-tester new file . Retrying STEP: delete the pod 11/08/22 18:13:22.159 Nov 8 18:13:22.179: INFO: Waiting for pod pod-subpath-test-downwardapi-w6dq to disappear Nov 8 18:13:22.186: INFO: Pod pod-subpath-test-downwardapi-w6dq no longer exists Nov 8 18:13:22.186: INFO: Unexpected error: <*errors.errorString | 0xc000f513d0>: { s: "expected pod \"pod-subpath-test-downwardapi-w6dq\" success: error while waiting for pod subpath-5748/pod-subpath-test-downwardapi-w6dq to be Succeeded or Failed: pod \"pod-subpath-test-downwardapi-w6dq\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:13:16 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:13:19 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:13:19 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:13:16 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.1 PodIP:10.88.0.160 PodIPs:[{IP:10.88.0.160} {IP:2001:4860:4860::a0}] StartTime:2022-11-08 18:13:16 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:test-container-subpath-downwardapi-w6dq State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:Error,Message:,StartedAt:2022-11-08 18:13:18 +0000 UTC,FinishedAt:2022-11-08 18:13:19 +0000 UTC,ContainerID:containerd://97fcd22c3200dcf344d1139872e35dc592b0d69eb2a46fa0f47eecfe124de06c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/agnhost:2.40 ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 ContainerID:containerd://97fcd22c3200dcf344d1139872e35dc592b0d69eb2a46fa0f47eecfe124de06c Started:0xc00369ef1f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", } Nov 8 18:13:22.186: FAIL: expected pod "pod-subpath-test-downwardapi-w6dq" success: error while waiting for pod subpath-5748/pod-subpath-test-downwardapi-w6dq to be Succeeded or Failed: pod "pod-subpath-test-downwardapi-w6dq" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:13:16 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:13:19 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:13:19 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-11-08 18:13:16 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.1 PodIP:10.88.0.160 PodIPs:[{IP:10.88.0.160} {IP:2001:4860:4860::a0}] StartTime:2022-11-08 18:13:16 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:test-container-subpath-downwardapi-w6dq State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:Error,Message:,StartedAt:2022-11-08 18:13:18 +0000 UTC,FinishedAt:2022-11-08 18:13:19 +0000 UTC,ContainerID:containerd://97fcd22c3200dcf344d1139872e35dc592b0d69eb2a46fa0f47eecfe124de06c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/e2e-test-images/agnhost:2.40 ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 ContainerID:containerd://97fcd22c3200dcf344d1139872e35dc592b0d69eb2a46fa0f47eecfe124de06c Started:0xc00369ef1f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Full Stack Trace k8s.io/kubernetes/test/e2e/framework/pod/output.TestContainerOutputMatcher(0xc00284f000?, {0x74fbbf9?, 0x0?}, 0xc00284f000, 0x0, {0xc003433df8, 0x1, 0x1}, 0x0?) test/e2e/framework/pod/output/output.go:237 +0x176 k8s.io/kubernetes/test/e2e/framework/pod/output.TestContainerOutput(...) test/e2e/framework/pod/output/output.go:214 k8s.io/kubernetes/test/e2e/storage/testsuites.TestBasicSubpathFile(0xc00166e000?, {0xc002936f60?, 0x21?}, 0xc00284f000?, {0x74c59b9?, 0xc003433e80?}) test/e2e/storage/testsuites/subpath.go:489 +0x12a k8s.io/kubernetes/test/e2e/storage/testsuites.TestBasicSubpath(...) test/e2e/storage/testsuites/subpath.go:480 k8s.io/kubernetes/test/e2e/storage.glob..func31.1.5() test/e2e/storage/subpath.go:98 +0x17d [AfterEach] [sig-storage] Subpath test/e2e/framework/node/init/init.go:32 Nov 8 18:13:22.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Subpath test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Subpath dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/08/22 18:13:22.195 STEP: Collecting events from namespace "subpath-5748". 11/08/22 18:13:22.196 STEP: Found 4 events. 11/08/22 18:13:22.2 Nov 8 18:13:22.200: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-subpath-test-downwardapi-w6dq: { } Scheduled: Successfully assigned subpath-5748/pod-subpath-test-downwardapi-w6dq to 172.17.0.1 Nov 8 18:13:22.200: INFO: At 2022-11-08 18:13:18 +0000 UTC - event for pod-subpath-test-downwardapi-w6dq: {kubelet 172.17.0.1} Pulled: Co