Result | FAILURE |
Tests | 6 failed / 839 succeeded |
Started | |
Elapsed | 41m38s |
Revision | release-1.6 |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sDeployment\sshould\srun\sthe\slifecycle\sof\sa\sDeployment\s\[Conformance\]$'
[FAILED] failed to see MODIFIED event: watch closed before UntilWithoutRetry timeout
In [It] at: test/e2e/apps/deployment.go:424 @ 01/14/23 18:35:22.952
(from ginkgo_report.xml)
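For context, the step that fails here blocks on a client-go watch of the Deployment until a MODIFIED event for it is observed. The sketch below illustrates that pattern with client-go's watch tools; it is a minimal illustration under stated assumptions, not the test's code: `client`, `ns`, `name`, the label selector, and the 5-minute timeout are placeholders, not values taken from this log. `watchtools.Until` wraps the supplied watcher in a RetryWatcher, so the "watch closed before UntilWithoutRetry timeout" error above is what surfaces when the event channel closes before the condition matches.

```go
package example

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	watchtools "k8s.io/client-go/tools/watch"
)

// waitForDeploymentModified is a minimal sketch of the watch pattern used by the
// failing step: open a watch on Deployments and block until a MODIFIED event for
// the named Deployment arrives, or the context times out. All identifiers here
// (client, ns, name, resourceVersion, the selector, the timeout) are illustrative.
func waitForDeploymentModified(client kubernetes.Interface, ns, name, resourceVersion string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	// watchtools.Until needs a cache.Watcher so it can re-establish the watch
	// from a resourceVersion; a ListWatch with only a WatchFunc is enough here.
	lw := &cache.ListWatch{
		WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
			options.LabelSelector = "test-deployment-static=true" // selector is an assumption
			return client.AppsV1().Deployments(ns).Watch(ctx, options)
		},
	}

	// If the retried watch stops (for example after an unrecoverable ERROR event,
	// as logged in this run) before the condition returns true, UntilWithoutRetry
	// returns exactly the error seen above: "watch closed before UntilWithoutRetry timeout".
	_, err := watchtools.Until(ctx, resourceVersion, lw, func(event watch.Event) (bool, error) {
		if d, ok := event.Object.(*appsv1.Deployment); ok {
			return d.Name == name && event.Type == watch.Modified, nil
		}
		return false, nil
	})
	if err != nil {
		return fmt.Errorf("failed to see MODIFIED event: %w", err)
	}
	return nil
}
```

In this run the log records "observed event type ERROR" at 18:35:22.952 immediately before the failure, which is consistent with the watch being closed before the expected MODIFIED event for the patched DeploymentStatus was delivered.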
> Enter [BeforeEach] [sig-apps] Deployment - set up framework | framework.go:188 @ 01/14/23 18:28:39.173 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/14/23 18:28:39.173 Jan 14 18:28:39.173: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig STEP: Building a namespace api object, basename deployment - test/e2e/framework/framework.go:247 @ 01/14/23 18:28:39.174 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/14/23 18:28:39.495 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/14/23 18:28:39.701 < Exit [BeforeEach] [sig-apps] Deployment - set up framework | framework.go:188 @ 01/14/23 18:28:39.906 (733ms) > Enter [BeforeEach] [sig-apps] Deployment - test/e2e/framework/metrics/init/init.go:33 @ 01/14/23 18:28:39.906 < Exit [BeforeEach] [sig-apps] Deployment - test/e2e/framework/metrics/init/init.go:33 @ 01/14/23 18:28:39.906 (0s) > Enter [BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 @ 01/14/23 18:28:39.906 < Exit [BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 @ 01/14/23 18:28:39.906 (0s) > Enter [It] should run the lifecycle of a Deployment [Conformance] - test/e2e/apps/deployment.go:185 @ 01/14/23 18:28:39.906 STEP: creating a Deployment - test/e2e/apps/deployment.go:207 @ 01/14/23 18:28:40.017 STEP: waiting for Deployment to be created - test/e2e/apps/deployment.go:217 @ 01/14/23 18:28:40.13 STEP: waiting for all Replicas to be Ready - test/e2e/apps/deployment.go:235 @ 01/14/23 18:28:40.252 Jan 14 18:28:40.372: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:28:40.372: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:28:40.372: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:28:40.372: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:28:40.372: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:28:40.372: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:28:40.417: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:28:40.417: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:29:16.405: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 14 18:29:16.405: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 14 18:29:43.324: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment - test/e2e/apps/deployment.go:253 @ 01/14/23 18:29:43.324 W0114 18:29:43.483877 60153 warnings.go:70] unknown field 
"spec.template.spec.TerminationGracePeriodSeconds" Jan 14 18:29:43.597: INFO: observed event type ADDED STEP: waiting for Replicas to scale - test/e2e/apps/deployment.go:294 @ 01/14/23 18:29:43.597 Jan 14 18:29:43.724: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.724: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.724: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.724: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.730: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.730: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.730: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.730: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.741: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 Jan 14 18:29:43.741: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 Jan 14 18:29:43.741: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.741: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.748: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.748: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.748: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.748: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.753: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.753: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.810: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 Jan 14 18:29:43.810: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 Jan 14 18:29:43.861: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 Jan 14 18:29:43.861: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 Jan 14 18:30:28.716: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:30:28.716: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:30:28.768: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 STEP: listing Deployments - test/e2e/apps/deployment.go:315 @ 01/14/23 18:30:28.768 Jan 14 18:30:28.872: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment - test/e2e/apps/deployment.go:332 @ 01/14/23 18:30:28.872 Jan 14 18:30:29.102: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 STEP: fetching the DeploymentStatus - test/e2e/apps/deployment.go:367 @ 01/14/23 18:30:29.102 Jan 14 18:30:29.325: INFO: 
observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:30:29.331: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:30:29.331: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:30:29.340: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:30:29.340: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:30:58.657: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:33:10.214: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:33:10.250: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:33:10.296: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Automatically polling progress: [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] (Spec Runtime: 5m0.734s) test/e2e/apps/deployment.go:185 In [It] (Node Runtime: 5m0.001s) test/e2e/apps/deployment.go:185 At [By Step] fetching the DeploymentStatus (Step Runtime: 3m10.805s) test/e2e/apps/deployment.go:367 Spec Goroutine goroutine 526 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x80f0620, 0xc001106540}, {0x80cc560, 0xc001327980}, {0xc0057e1830, 0x1, 0x45d964b800?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x80f0620, 0xc001106540}, {0xc0024001d7?, 0x80a8310?}, {0x80bca00?, 0xc000e8e630?}, {0xc0057e1830, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:113 > k8s.io/kubernetes/test/e2e/apps.glob..func5.13({0x7f52910591d8?, 0xc00135b600}) test/e2e/apps/deployment.go:378 | ctxUntil, cancel = context.WithTimeout(ctx, f.Timeouts.PodStart) | defer cancel() > _, err = watchtools.Until(ctxUntil, deploymentsList.ResourceVersion, w, func(event watch.Event) (bool, error) { | if deployment, ok := event.Object.(*appsv1.Deployment); ok { | found := deployment.ObjectMeta.Name == testDeployment.Name && k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x80f8d88?, 0xc00135b600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Automatically polling progress: [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] (Spec Runtime: 5m20.737s) test/e2e/apps/deployment.go:185 In [It] (Node Runtime: 5m20.004s) test/e2e/apps/deployment.go:185 At [By Step] fetching the 
DeploymentStatus (Step Runtime: 3m30.808s) test/e2e/apps/deployment.go:367 Spec Goroutine goroutine 526 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x80f0620, 0xc001106540}, {0x80cc560, 0xc001327980}, {0xc0057e1830, 0x1, 0x45d964b800?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x80f0620, 0xc001106540}, {0xc0024001d7?, 0x80a8310?}, {0x80bca00?, 0xc000e8e630?}, {0xc0057e1830, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:113 > k8s.io/kubernetes/test/e2e/apps.glob..func5.13({0x7f52910591d8?, 0xc00135b600}) test/e2e/apps/deployment.go:378 | ctxUntil, cancel = context.WithTimeout(ctx, f.Timeouts.PodStart) | defer cancel() > _, err = watchtools.Until(ctxUntil, deploymentsList.ResourceVersion, w, func(event watch.Event) (bool, error) { | if deployment, ok := event.Object.(*appsv1.Deployment); ok { | found := deployment.ObjectMeta.Name == testDeployment.Name && k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x80f8d88?, 0xc00135b600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Automatically polling progress: [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] (Spec Runtime: 5m40.739s) test/e2e/apps/deployment.go:185 In [It] (Node Runtime: 5m40.006s) test/e2e/apps/deployment.go:185 At [By Step] fetching the DeploymentStatus (Step Runtime: 3m50.81s) test/e2e/apps/deployment.go:367 Spec Goroutine goroutine 526 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x80f0620, 0xc001106540}, {0x80cc560, 0xc001327980}, {0xc0057e1830, 0x1, 0x45d964b800?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x80f0620, 0xc001106540}, {0xc0024001d7?, 0x80a8310?}, {0x80bca00?, 0xc000e8e630?}, {0xc0057e1830, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:113 > k8s.io/kubernetes/test/e2e/apps.glob..func5.13({0x7f52910591d8?, 0xc00135b600}) test/e2e/apps/deployment.go:378 | ctxUntil, cancel = context.WithTimeout(ctx, f.Timeouts.PodStart) | defer cancel() > _, err = watchtools.Until(ctxUntil, deploymentsList.ResourceVersion, w, func(event watch.Event) (bool, error) { | if deployment, ok := event.Object.(*appsv1.Deployment); ok { | found := deployment.ObjectMeta.Name == testDeployment.Name && k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x80f8d88?, 0xc00135b600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Automatically polling progress: [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] (Spec Runtime: 6m0.741s) test/e2e/apps/deployment.go:185 In [It] (Node Runtime: 6m0.008s) test/e2e/apps/deployment.go:185 At [By Step] fetching the DeploymentStatus (Step Runtime: 4m10.812s) test/e2e/apps/deployment.go:367 Spec Goroutine goroutine 526 [select, 2 minutes] 
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x80f0620, 0xc001106540}, {0x80cc560, 0xc001327980}, {0xc0057e1830, 0x1, 0x45d964b800?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x80f0620, 0xc001106540}, {0xc0024001d7?, 0x80a8310?}, {0x80bca00?, 0xc000e8e630?}, {0xc0057e1830, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:113 > k8s.io/kubernetes/test/e2e/apps.glob..func5.13({0x7f52910591d8?, 0xc00135b600}) test/e2e/apps/deployment.go:378 | ctxUntil, cancel = context.WithTimeout(ctx, f.Timeouts.PodStart) | defer cancel() > _, err = watchtools.Until(ctxUntil, deploymentsList.ResourceVersion, w, func(event watch.Event) (bool, error) { | if deployment, ok := event.Object.(*appsv1.Deployment); ok { | found := deployment.ObjectMeta.Name == testDeployment.Name && k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x80f8d88?, 0xc00135b600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Automatically polling progress: [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] (Spec Runtime: 6m20.744s) test/e2e/apps/deployment.go:185 In [It] (Node Runtime: 6m20.01s) test/e2e/apps/deployment.go:185 At [By Step] fetching the DeploymentStatus (Step Runtime: 4m30.815s) test/e2e/apps/deployment.go:367 Spec Goroutine goroutine 526 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x80f0620, 0xc001106540}, {0x80cc560, 0xc001327980}, {0xc0057e1830, 0x1, 0x45d964b800?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x80f0620, 0xc001106540}, {0xc0024001d7?, 0x80a8310?}, {0x80bca00?, 0xc000e8e630?}, {0xc0057e1830, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:113 > k8s.io/kubernetes/test/e2e/apps.glob..func5.13({0x7f52910591d8?, 0xc00135b600}) test/e2e/apps/deployment.go:378 | ctxUntil, cancel = context.WithTimeout(ctx, f.Timeouts.PodStart) | defer cancel() > _, err = watchtools.Until(ctxUntil, deploymentsList.ResourceVersion, w, func(event watch.Event) (bool, error) { | if deployment, ok := event.Object.(*appsv1.Deployment); ok { | found := deployment.ObjectMeta.Name == testDeployment.Name && k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x80f8d88?, 0xc00135b600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Automatically polling progress: [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] (Spec Runtime: 6m40.746s) test/e2e/apps/deployment.go:185 In [It] (Node Runtime: 6m40.013s) test/e2e/apps/deployment.go:185 At [By Step] fetching the DeploymentStatus (Step Runtime: 4m50.817s) test/e2e/apps/deployment.go:367 Spec Goroutine goroutine 526 [select, 3 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x80f0620, 0xc001106540}, {0x80cc560, 0xc001327980}, {0xc0057e1830, 0x1, 0x45d964b800?}) 
vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x80f0620, 0xc001106540}, {0xc0024001d7?, 0x80a8310?}, {0x80bca00?, 0xc000e8e630?}, {0xc0057e1830, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:113 > k8s.io/kubernetes/test/e2e/apps.glob..func5.13({0x7f52910591d8?, 0xc00135b600}) test/e2e/apps/deployment.go:378 | ctxUntil, cancel = context.WithTimeout(ctx, f.Timeouts.PodStart) | defer cancel() > _, err = watchtools.Until(ctxUntil, deploymentsList.ResourceVersion, w, func(event watch.Event) (bool, error) { | if deployment, ok := event.Object.(*appsv1.Deployment); ok { | found := deployment.ObjectMeta.Name == testDeployment.Name && k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x80f8d88?, 0xc00135b600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Jan 14 18:35:22.676: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus - test/e2e/apps/deployment.go:394 @ 01/14/23 18:35:22.744 Jan 14 18:35:22.952: INFO: observed event type ERROR Jan 14 18:35:22.952: INFO: Unexpected error: failed to see MODIFIED event: <*errors.errorString | 0xc00056fe50>: { s: "watch closed before UntilWithoutRetry timeout", } [FAILED] failed to see MODIFIED event: watch closed before UntilWithoutRetry timeout In [It] at: test/e2e/apps/deployment.go:424 @ 01/14/23 18:35:22.952 < Exit [It] should run the lifecycle of a Deployment [Conformance] - test/e2e/apps/deployment.go:185 @ 01/14/23 18:35:22.952 (6m43.046s) > Enter [AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 @ 01/14/23 18:35:22.952 Jan 14 18:35:23.060: INFO: Deployment "test-deployment": &Deployment{ObjectMeta:{test-deployment deployment-6429 2549d3d4-b5bc-406b-a543-5d72dc5e36f8 19829 3 2023-01-14 18:28:40 +0000 UTC <nil> <nil> map[test-deployment:updated test-deployment-static:true] map[deployment.kubernetes.io/revision:3] [] [] [{e2e.test Update apps/v1 2023-01-14 18:30:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-deployment":{},"f:test-deployment-static":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 18:35:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002656918 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil <nil> [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:2,UpdatedReplicas:2,AvailableReplicas:2,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-14 18:30:58 +0000 UTC,LastTransitionTime:2023-01-14 18:30:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-deployment-7b7876f9d6" has successfully progressed.,LastUpdateTime:2023-01-14 18:35:22 +0000 UTC,LastTransitionTime:2023-01-14 18:28:40 +0000 UTC,},},ReadyReplicas:2,CollisionCount:nil,},} Jan 14 18:35:23.178: INFO: New ReplicaSet "test-deployment-7b7876f9d6" of Deployment "test-deployment": &ReplicaSet{ObjectMeta:{test-deployment-7b7876f9d6 deployment-6429 5f7915a8-0284-447b-97fb-9757918f8bca 19821 2 2023-01-14 18:30:28 +0000 UTC <nil> <nil> map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 2549d3d4-b5bc-406b-a543-5d72dc5e36f8 0xc0006363e7 0xc0006363e8}] [] [{kube-controller-manager Update apps/v1 2023-01-14 18:33:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2549d3d4-b5bc-406b-a543-5d72dc5e36f8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 18:35:22 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7b7876f9d6,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0006364a0 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil <nil> [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} Jan 14 18:35:23.178: INFO: All old ReplicaSets of Deployment "test-deployment": Jan 14 18:35:23.178: INFO: &ReplicaSet{ObjectMeta:{test-deployment-f4dbc4647 deployment-6429 bcbdc001-3a56-4d8c-8ad5-7f1d95a9e873 10071 3 2023-01-14 18:28:40 +0000 UTC <nil> <nil> map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 2549d3d4-b5bc-406b-a543-5d72dc5e36f8 0xc000636677 0xc000636678}] [] [{kube-controller-manager Update apps/v1 2023-01-14 18:30:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2549d3d4-b5bc-406b-a543-5d72dc5e36f8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 18:30:28 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: f4dbc4647,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000636740 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil <nil> [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 14 18:35:23.178: INFO: &ReplicaSet{ObjectMeta:{test-deployment-7df74c55ff deployment-6429 da25b88c-2ba1-46d7-839f-0800bdd9c4e3 19828 4 2023-01-14 18:29:43 +0000 UTC <nil> <nil> map[pod-template-hash:7df74c55ff test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 2549d3d4-b5bc-406b-a543-5d72dc5e36f8 0xc000636507 0xc000636508}] [] [{kube-controller-manager Update apps/v1 2023-01-14 18:35:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2549d3d4-b5bc-406b-a543-5d72dc5e36f8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 18:35:22 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7df74c55ff,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:7df74c55ff test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/pause:3.9 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0006365e0 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil <nil> [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 14 18:35:23.305: INFO: Pod "test-deployment-7b7876f9d6-cjtpl" is available: &Pod{ObjectMeta:{test-deployment-7b7876f9d6-cjtpl test-deployment-7b7876f9d6- deployment-6429 c3330d93-8c51-4081-954b-d7863d2e34ec 19820 0 2023-01-14 18:33:10 +0000 UTC <nil> <nil> map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[cni.projectcalico.org/containerID:c6a21cd15c415a8f33cc19dc48d6461ef15bbb3c3058cb691a000e03458df840 cni.projectcalico.org/podIP:192.168.14.143/32 cni.projectcalico.org/podIPs:192.168.14.143/32] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 5f7915a8-0284-447b-97fb-9757918f8bca 0xc000636de7 0xc000636de8}] [] [{kube-controller-manager Update v1 2023-01-14 18:33:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f7915a8-0284-447b-97fb-9757918f8bca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 
2023-01-14 18:33:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-14 18:35:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.14.143\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mm9p6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mm9p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-67tgp2-mp-0000001,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReferenc
e{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:33:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:35:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:35:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:33:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.14.143,StartTime:2023-01-14 18:33:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 18:35:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://9ae7109d0c33623928905f29299b05b518e7b32e644707fd2f2e02f93b1cd72c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.14.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 18:35:23.306: INFO: Pod "test-deployment-7b7876f9d6-zqb4p" is available: &Pod{ObjectMeta:{test-deployment-7b7876f9d6-zqb4p test-deployment-7b7876f9d6- deployment-6429 232acc3e-d6c3-454c-a9f8-430418cc77d3 15076 0 2023-01-14 18:30:28 +0000 UTC <nil> <nil> map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[cni.projectcalico.org/containerID:75d42c2a467b83339577445f3f1418cd4a3b1af64312c272c9c68379de838b6d cni.projectcalico.org/podIP:192.168.243.197/32 cni.projectcalico.org/podIPs:192.168.243.197/32] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 5f7915a8-0284-447b-97fb-9757918f8bca 0xc000636ff7 0xc000636ff8}] [] [{kube-controller-manager Update v1 2023-01-14 18:30:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f7915a8-0284-447b-97fb-9757918f8bca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2023-01-14 18:30:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-14 18:33:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.243.197\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w245t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w245t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-67tgp2-mp-0000000,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:30:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:32:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:32:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:30:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:192.168.243.197,StartTime:2023-01-14 18:30:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 18:32:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://f48bc76bb53855baa4ce4d45c9931a0b48b21e0a6906aacc614e480e98d562f6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.243.197,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 18:35:23.306: INFO: Pod "test-deployment-7df74c55ff-84hdq" is available: &Pod{ObjectMeta:{test-deployment-7df74c55ff-84hdq test-deployment-7df74c55ff- deployment-6429 0b99fc2d-649d-4337-bc61-d396a4df9b22 19825 0 2023-01-14 18:29:43 +0000 UTC 2023-01-14 18:35:23 +0000 UTC 0xc0006371e0 map[pod-template-hash:7df74c55ff test-deployment-static:true] map[cni.projectcalico.org/containerID:60f03ecaba0062757e05bd2cbfcf17abf48a19b1b31c860848e3400ed09ed165 cni.projectcalico.org/podIP:192.168.14.251/32 cni.projectcalico.org/podIPs:192.168.14.251/32] [{apps/v1 ReplicaSet test-deployment-7df74c55ff 
da25b88c-2ba1-46d7-839f-0800bdd9c4e3 0xc000637217 0xc000637218}] [] [{kube-controller-manager Update v1 2023-01-14 18:29:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da25b88c-2ba1-46d7-839f-0800bdd9c4e3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2023-01-14 18:30:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-14 18:30:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.14.251\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s2j78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2j78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,
WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-67tgp2-mp-0000001,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:29:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:30:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:30:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:29:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.14.251,StartTime:2023-01-14 18:29:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 18:30:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.9,ImageID:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,ContainerID:containerd://af502340a96eeef8c889108b9afc9181435dfdd60ab707f008d5b119efbaee05,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.14.251,},},EphemeralContainerStatuses:[]ContainerStatus{},},} < Exit [AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 @ 01/14/23 18:35:23.306 (354ms) > Enter [AfterEach] [sig-apps] Deployment - test/e2e/framework/node/init/init.go:33 @ 01/14/23 18:35:23.306 Jan 14 18:35:23.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-apps] Deployment - test/e2e/framework/node/init/init.go:33 @ 01/14/23 18:35:23.471 (165ms) > Enter [DeferCleanup (Each)] [sig-apps] Deployment - test/e2e/framework/metrics/init/init.go:35 @ 01/14/23 
18:35:23.471 < Exit [DeferCleanup (Each)] [sig-apps] Deployment - test/e2e/framework/metrics/init/init.go:35 @ 01/14/23 18:35:23.471 (0s) > Enter [DeferCleanup (Each)] [sig-apps] Deployment - dump namespaces | framework.go:206 @ 01/14/23 18:35:23.471 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/14/23 18:35:23.471 STEP: Collecting events from namespace "deployment-6429". - test/e2e/framework/debug/dump.go:42 @ 01/14/23 18:35:23.471 STEP: Found 48 events. - test/e2e/framework/debug/dump.go:46 @ 01/14/23 18:35:23.651 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:28:40 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-f4dbc4647 to 2 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:28:40 +0000 UTC - event for test-deployment-f4dbc4647: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-f4dbc4647-44kwf Jan 14 18:35:23.651: INFO: At 2023-01-14 18:28:40 +0000 UTC - event for test-deployment-f4dbc4647: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-f4dbc4647-sr5jn Jan 14 18:35:23.651: INFO: At 2023-01-14 18:28:40 +0000 UTC - event for test-deployment-f4dbc4647-44kwf: {default-scheduler } Scheduled: Successfully assigned deployment-6429/test-deployment-f4dbc4647-44kwf to capz-67tgp2-mp-0000000 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:28:40 +0000 UTC - event for test-deployment-f4dbc4647-sr5jn: {default-scheduler } Scheduled: Successfully assigned deployment-6429/test-deployment-f4dbc4647-sr5jn to capz-67tgp2-mp-0000001 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:07 +0000 UTC - event for test-deployment-f4dbc4647-sr5jn: {kubelet capz-67tgp2-mp-0000001} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:08 +0000 UTC - event for test-deployment-f4dbc4647-44kwf: {kubelet capz-67tgp2-mp-0000000} Created: Created container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:08 +0000 UTC - event for test-deployment-f4dbc4647-44kwf: {kubelet capz-67tgp2-mp-0000000} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:08 +0000 UTC - event for test-deployment-f4dbc4647-sr5jn: {kubelet capz-67tgp2-mp-0000001} Created: Created container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:09 +0000 UTC - event for test-deployment-f4dbc4647-44kwf: {kubelet capz-67tgp2-mp-0000000} Started: Started container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:12 +0000 UTC - event for test-deployment-f4dbc4647-sr5jn: {kubelet capz-67tgp2-mp-0000001} Started: Started container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:43 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled down replica set test-deployment-f4dbc4647 to 1 from 2 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:43 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-7df74c55ff to 1 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:43 +0000 UTC - event for test-deployment-7df74c55ff: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-7df74c55ff-84hdq Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:43 +0000 UTC - event for test-deployment-7df74c55ff-84hdq: {default-scheduler } Scheduled: Successfully assigned 
deployment-6429/test-deployment-7df74c55ff-84hdq to capz-67tgp2-mp-0000001 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:43 +0000 UTC - event for test-deployment-f4dbc4647: {replicaset-controller } SuccessfulDelete: Deleted pod: test-deployment-f4dbc4647-44kwf Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:43 +0000 UTC - event for test-deployment-f4dbc4647-44kwf: {kubelet capz-67tgp2-mp-0000000} Killing: Stopping container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:16 +0000 UTC - event for test-deployment-7df74c55ff-84hdq: {kubelet capz-67tgp2-mp-0000001} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:19 +0000 UTC - event for test-deployment-7df74c55ff-84hdq: {kubelet capz-67tgp2-mp-0000001} Created: Created container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:23 +0000 UTC - event for test-deployment-7df74c55ff-84hdq: {kubelet capz-67tgp2-mp-0000001} Started: Started container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled down replica set test-deployment-f4dbc4647 to 0 from 1 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-7df74c55ff to 2 from 1 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-7b7876f9d6 to 1 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment-7b7876f9d6: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-7b7876f9d6-zqb4p Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment-7df74c55ff: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-7df74c55ff-s9lvr Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment-7df74c55ff-s9lvr: {default-scheduler } Scheduled: Successfully assigned deployment-6429/test-deployment-7df74c55ff-s9lvr to capz-67tgp2-mp-0000000 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment-f4dbc4647: {replicaset-controller } SuccessfulDelete: Deleted pod: test-deployment-f4dbc4647-sr5jn Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment-f4dbc4647-sr5jn: {kubelet capz-67tgp2-mp-0000001} Killing: Stopping container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:29 +0000 UTC - event for test-deployment-7b7876f9d6-zqb4p: {default-scheduler } Scheduled: Successfully assigned deployment-6429/test-deployment-7b7876f9d6-zqb4p to capz-67tgp2-mp-0000000 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:45 +0000 UTC - event for test-deployment-7b7876f9d6-zqb4p: {kubelet capz-67tgp2-mp-0000000} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:45 +0000 UTC - event for test-deployment-7df74c55ff-s9lvr: {kubelet capz-67tgp2-mp-0000000} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:48 +0000 UTC - event for test-deployment-7df74c55ff-s9lvr: {kubelet capz-67tgp2-mp-0000000} Created: Created container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:50 +0000 UTC - event for test-deployment-7df74c55ff-s9lvr: {kubelet 
capz-67tgp2-mp-0000000} Started: Started container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:32:52 +0000 UTC - event for test-deployment-7b7876f9d6-zqb4p: {kubelet capz-67tgp2-mp-0000000} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 3.529998755s (2m6.377886862s including waiting) Jan 14 18:35:23.651: INFO: At 2023-01-14 18:32:54 +0000 UTC - event for test-deployment-7b7876f9d6-zqb4p: {kubelet capz-67tgp2-mp-0000000} Created: Created container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:32:57 +0000 UTC - event for test-deployment-7b7876f9d6-zqb4p: {kubelet capz-67tgp2-mp-0000000} Started: Started container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:33:10 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled down replica set test-deployment-7df74c55ff to 1 from 2 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:33:10 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-7b7876f9d6 to 2 from 1 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:33:10 +0000 UTC - event for test-deployment-7b7876f9d6: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-7b7876f9d6-cjtpl Jan 14 18:35:23.651: INFO: At 2023-01-14 18:33:10 +0000 UTC - event for test-deployment-7b7876f9d6-cjtpl: {default-scheduler } Scheduled: Successfully assigned deployment-6429/test-deployment-7b7876f9d6-cjtpl to capz-67tgp2-mp-0000001 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:33:10 +0000 UTC - event for test-deployment-7df74c55ff: {replicaset-controller } SuccessfulDelete: Deleted pod: test-deployment-7df74c55ff-s9lvr Jan 14 18:35:23.651: INFO: At 2023-01-14 18:33:10 +0000 UTC - event for test-deployment-7df74c55ff-s9lvr: {kubelet capz-67tgp2-mp-0000000} Killing: Stopping container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:33:16 +0000 UTC - event for test-deployment-7b7876f9d6-cjtpl: {kubelet capz-67tgp2-mp-0000001} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" Jan 14 18:35:23.651: INFO: At 2023-01-14 18:35:10 +0000 UTC - event for test-deployment-7b7876f9d6-cjtpl: {kubelet capz-67tgp2-mp-0000001} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 601.673623ms (1m53.668389496s including waiting) Jan 14 18:35:23.651: INFO: At 2023-01-14 18:35:10 +0000 UTC - event for test-deployment-7b7876f9d6-cjtpl: {kubelet capz-67tgp2-mp-0000001} Created: Created container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:35:11 +0000 UTC - event for test-deployment-7b7876f9d6-cjtpl: {kubelet capz-67tgp2-mp-0000001} Started: Started container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:35:22 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled down replica set test-deployment-7df74c55ff to 0 from 1 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:35:22 +0000 UTC - event for test-deployment-7df74c55ff: {replicaset-controller } SuccessfulDelete: Deleted pod: test-deployment-7df74c55ff-84hdq Jan 14 18:35:23.774: INFO: POD NODE PHASE GRACE CONDITIONS Jan 14 18:35:23.774: INFO: test-deployment-7b7876f9d6-cjtpl capz-67tgp2-mp-0000001 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:33:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:35:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:35:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2023-01-14 18:33:10 +0000 UTC }] Jan 14 18:35:23.774: INFO: test-deployment-7b7876f9d6-zqb4p capz-67tgp2-mp-0000000 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:32:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:32:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:28 +0000 UTC }] Jan 14 18:35:23.774: INFO: test-deployment-7df74c55ff-84hdq capz-67tgp2-mp-0000001 Running 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC }] Jan 14 18:35:23.774: INFO: Jan 14 18:35:24.519: INFO: Logging node info for node capz-67tgp2-control-plane-2chph Jan 14 18:35:24.633: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-control-plane-2chph 28170de3-aa87-4a67-a5ad-65493aeb11b3 12074 0 2023-01-14 18:16:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-control-plane-2chph kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:northeurope-2] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-67tgp2-control-plane-tj79f cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-67tgp2-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.35.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-14 18:16:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubeadm Update v1 2023-01-14 18:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-01-14 18:17:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 18:17:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-node-manager Update v1 2023-01-14 18:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {manager Update v1 2023-01-14 18:20:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-14 18:31:33 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.0.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachines/capz-67tgp2-control-plane-2chph,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[192.168.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:17:47 +0000 UTC,LastTransitionTime:2023-01-14 18:17:47 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:17:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.0.0.4,},NodeAddress{Type:Hostname,Address:capz-67tgp2-control-plane-2chph,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aa56c5629889429baa21826756529ecb,SystemUUID:744c1c53-9da3-134c-b7da-86c573f76ec3,BootID:b6ed8583-6ec6-40d3-b9e2-4bfd39a59694,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-controller-manager@sha256:a52d9377e1464d9e2d827e6555d7edf9082b5d85b60676d2fd74b87e202bad0c capzci.azurecr.io/azure-cloud-controller-manager:63c1cd3],SizeBytes:16980267,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 
registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:35:24.634: INFO: Logging kubelet events for node capz-67tgp2-control-plane-2chph Jan 14 18:35:24.740: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-control-plane-2chph Jan 14 18:35:24.953: INFO: kube-proxy-j74l7 started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:35:24.953: INFO: calico-node-g5dqz started at 2023-01-14 18:17:11 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:35:24.953: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:35:24.953: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:35:24.953: INFO: cloud-node-manager-5qlnt started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:35:24.953: INFO: cloud-controller-manager-64479fbc67-xdds2 started at 2023-01-14 18:20:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Container cloud-controller-manager ready: true, restart count 0 Jan 14 18:35:24.953: INFO: etcd-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Container etcd ready: true, restart count 0 Jan 14 18:35:24.953: INFO: kube-apiserver-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Container kube-apiserver ready: true, restart count 0 Jan 14 18:35:24.953: INFO: kube-scheduler-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:45 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Container kube-scheduler ready: true, restart count 0 Jan 14 18:35:24.953: INFO: kube-controller-manager-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Container kube-controller-manager ready: true, restart count 0 Jan 14 18:35:25.488: INFO: Latency metrics for node capz-67tgp2-control-plane-2chph Jan 14 18:35:25.488: INFO: Logging node info for node capz-67tgp2-mp-0000000 Jan 14 18:35:25.595: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-mp-0000000 d6bf69fc-90f8-43c8-9623-356f58ea157f 16641 0 2023-01-14 18:19:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:northeurope failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-mp-0000000 kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 
cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/owner-kind:MachinePool cluster.x-k8s.io/owner-name:capz-67tgp2-mp-0 kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.243.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-14 18:19:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 18:20:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.1.0/24\"":{}}}} } {manager Update v1 2023-01-14 18:21:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kubelet Update v1 2023-01-14 18:33:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.1.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-67tgp2-mp-0/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[192.168.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31025332224 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27922798956 0} {<nil>} 27922798956 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:20:12 +0000 UTC,LastTransitionTime:2023-01-14 18:20:12 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-67tgp2-mp-0000000,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:95d9ab6ead5141e2b46b1d18fec95432,SystemUUID:3fc8a171-f25a-2049-95d3-3c4be76d51a7,BootID:b9ac1a12-eff5-45ad-b970-9df972ef339e,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 
registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:35:25.596: INFO: Logging kubelet events for node capz-67tgp2-mp-0000000 Jan 14 18:35:25.704: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-mp-0000000 Jan 14 18:35:25.883: INFO: tester started at 2023-01-14 18:35:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container tester ready: true, restart count 0 Jan 14 18:35:25.883: INFO: coredns-56f4c55bf9-zp98j started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container coredns 
ready: true, restart count 0 Jan 14 18:35:25.883: INFO: sample-webhook-deployment-865554f4d9-bb228 started at 2023-01-14 18:35:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container sample-webhook ready: false, restart count 0 Jan 14 18:35:25.883: INFO: test-rolling-update-deployment-7549d9f46d-pklnz started at 2023-01-14 18:35:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container agnhost ready: false, restart count 0 Jan 14 18:35:25.883: INFO: kube-proxy-8jftq started at 2023-01-14 18:19:05 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:35:25.883: INFO: downward-api-1073a5a4-0d5f-4af3-9e34-20a20f87b5ea started at 2023-01-14 18:35:19 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container dapi-container ready: false, restart count 0 Jan 14 18:35:25.883: INFO: pod-qos-class-bddd171a-e154-4523-9abb-837b2095dfbb started at 2023-01-14 18:33:27 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container agnhost ready: false, restart count 0 Jan 14 18:35:25.883: INFO: pod1 started at 2023-01-14 18:34:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container agnhost-container ready: true, restart count 0 Jan 14 18:35:25.883: INFO: dns-test-eeb40b41-fc0f-431a-8cac-0735a1f4243b started at 2023-01-14 18:33:46 +0000 UTC (0+3 container statuses recorded) Jan 14 18:35:25.883: INFO: Container jessie-querier ready: false, restart count 0 Jan 14 18:35:25.883: INFO: Container querier ready: false, restart count 0 Jan 14 18:35:25.883: INFO: Container webserver ready: false, restart count 0 Jan 14 18:35:25.883: INFO: update-demo-nautilus-mcn6g started at 2023-01-14 18:32:15 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container update-demo ready: true, restart count 0 Jan 14 18:35:25.883: INFO: pod-secrets-79279ea7-8705-47e4-a9ab-02b8a5479454 started at <nil> (0+0 container statuses recorded) Jan 14 18:35:25.883: INFO: execpodg9czm started at 2023-01-14 18:35:00 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container agnhost-container ready: true, restart count 0 Jan 14 18:35:25.883: INFO: test-deployment-7b7876f9d6-zqb4p started at 2023-01-14 18:30:28 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:35:25.883: INFO: alpine-nnp-true-a4de807c-d017-4cbf-80dd-9efa36816371 started at 2023-01-14 18:33:49 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container alpine-nnp-true-a4de807c-d017-4cbf-80dd-9efa36816371 ready: false, restart count 0 Jan 14 18:35:25.883: INFO: metrics-server-795d765ff8-rskk8 started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container metrics-server ready: true, restart count 0 Jan 14 18:35:25.883: INFO: ss2-0 started at 2023-01-14 18:31:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:25.883: INFO: externalname-service-pq2wx started at 2023-01-14 18:34:36 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container externalname-service ready: true, restart count 0 Jan 14 18:35:25.883: INFO: coredns-56f4c55bf9-4pfjc started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container coredns ready: true, restart count 0 
Jan 14 18:35:25.883: INFO: ss2-2 started at 2023-01-14 18:35:21 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container webserver ready: false, restart count 0 Jan 14 18:35:25.883: INFO: ss2-0 started at 2023-01-14 18:34:37 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:25.883: INFO: cloud-node-manager-l846f started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:35:25.883: INFO: test-ss-0 started at 2023-01-14 18:28:36 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:25.883: INFO: calico-kube-controllers-657b584867-tn8lq started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 14 18:35:25.883: INFO: calico-node-t5npc started at 2023-01-14 18:19:05 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:35:25.883: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:35:25.883: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:35:25.883: INFO: pod-configmaps-dfa2997d-eef5-48de-9ba4-2617684da066 started at <nil> (0+0 container statuses recorded) Jan 14 18:35:25.883: INFO: pod-secrets-5a523e88-d1f1-46b1-b8c2-7b0072c2daca started at 2023-01-14 18:35:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container secret-volume-test ready: false, restart count 0 Jan 14 18:35:26.991: INFO: Latency metrics for node capz-67tgp2-mp-0000000 Jan 14 18:35:26.991: INFO: Logging node info for node capz-67tgp2-mp-0000001 Jan 14 18:35:27.103: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-mp-0000001 a57d1a46-19d4-4265-8229-3bb32b89963d 19871 0 2023-01-14 18:18:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:northeurope failure-domain.beta.kubernetes.io/zone:1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-mp-0000001 kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:1] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/owner-kind:MachinePool cluster.x-k8s.io/owner-name:capz-67tgp2-mp-0 kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.14.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-14 18:18:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-01-14 
18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2023-01-14 18:20:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 18:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.2.0/24\"":{}}}} } {manager Update v1 2023-01-14 18:21:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kubelet Update v1 2023-01-14 18:35:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.2.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-67tgp2-mp-0/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[192.168.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31025332224 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27922798956 0} {<nil>} 27922798956 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:20:41 +0000 UTC,LastTransitionTime:2023-01-14 18:20:41 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:20:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting 
ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-67tgp2-mp-0000001,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e38f17c71746485985c8ebe9f1d87480,SystemUUID:31667858-013a-6c49-bd37-41a0bfb4cd7c,BootID:a61dc5b1-073f-4988-b019-c5aa35ecae86,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec 
registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:35:27.103: INFO: Logging kubelet events for node capz-67tgp2-mp-0000001 Jan 14 18:35:27.209: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-mp-0000001 Jan 14 18:35:27.379: INFO: ss2-1 started at 2023-01-14 18:33:26 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:27.379: INFO: update-demo-nautilus-gtnf9 started at 2023-01-14 18:32:15 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container update-demo ready: true, restart count 0 Jan 14 18:35:27.379: INFO: pod-ready started at 2023-01-14 18:34:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container pod-readiness-gate ready: true, restart count 0 Jan 14 18:35:27.379: INFO: cloud-node-manager-c24hp started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:35:27.379: INFO: test-deployment-7df74c55ff-84hdq started at 2023-01-14 18:29:43 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:35:27.379: INFO: ss2-1 started at 2023-01-14 18:33:52 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:27.379: INFO: sample-apiserver-deployment-55bd96fd47-ff7kc started at 2023-01-14 18:31:43 +0000 UTC (0+2 container statuses recorded) Jan 14 18:35:27.379: INFO: Container etcd ready: true, restart count 0 Jan 14 18:35:27.379: INFO: Container sample-apiserver ready: false, restart count 0 Jan 14 18:35:27.379: INFO: ss-0 started at 
2023-01-14 18:35:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:27.379: INFO: busybox-81487092-f501-4426-acf5-c16c8471c3c4 started at 2023-01-14 18:34:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container busybox ready: false, restart count 0 Jan 14 18:35:27.379: INFO: test-rolling-update-controller-lh8rd started at 2023-01-14 18:31:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container httpd ready: true, restart count 0 Jan 14 18:35:27.379: INFO: ss-0 started at 2023-01-14 18:34:43 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:27.379: INFO: busybox-user-65534-e1188811-c39c-4714-8d9f-b3aad5e7e12b started at 2023-01-14 18:35:11 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container busybox-user-65534-e1188811-c39c-4714-8d9f-b3aad5e7e12b ready: false, restart count 0 Jan 14 18:35:27.379: INFO: downwardapi-volume-b58576c1-737b-42c9-aeb6-1d8e6a721d70 started at 2023-01-14 18:35:16 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container client-container ready: true, restart count 0 Jan 14 18:35:27.379: INFO: var-expansion-ad6ba4d8-d241-42f0-b086-ce254fed1d9a started at <nil> (0+0 container statuses recorded) Jan 14 18:35:27.380: INFO: pod-configmaps-36d07591-4990-4769-bfcb-b3813928fe8c started at 2023-01-14 18:35:15 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container env-test ready: false, restart count 0 Jan 14 18:35:27.380: INFO: test-ss-1 started at 2023-01-14 18:31:37 +0000 UTC (0+2 container statuses recorded) Jan 14 18:35:27.380: INFO: Container test-ss ready: true, restart count 0 Jan 14 18:35:27.380: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:27.380: INFO: kube-proxy-xd8xz started at 2023-01-14 18:19:07 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:35:27.380: INFO: pod2 started at 2023-01-14 18:35:17 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container agnhost-container ready: false, restart count 0 Jan 14 18:35:27.380: INFO: calico-node-lzp55 started at 2023-01-14 18:19:07 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:35:27.380: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:35:27.380: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:35:27.380: INFO: image-pull-testdb5f66f7-9de7-465c-888d-fcd0f2ef78f0 started at 2023-01-14 18:34:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container image-pull-test ready: false, restart count 0 Jan 14 18:35:27.380: INFO: test-rs-46njb started at 2023-01-14 18:31:59 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container httpd ready: true, restart count 0 Jan 14 18:35:27.380: INFO: test-deployment-7b7876f9d6-cjtpl started at 2023-01-14 18:33:10 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:35:27.380: INFO: ss2-2 started at 2023-01-14 18:34:53 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container webserver ready: false, restart count 0 Jan 14 18:35:27.380: INFO: sample-webhook-deployment-865554f4d9-9s6vn 
started at <nil> (0+0 container statuses recorded) Jan 14 18:35:27.380: INFO: sample-webhook-deployment-865554f4d9-xz65d started at 2023-01-14 18:35:13 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container sample-webhook ready: false, restart count 0 Jan 14 18:35:27.380: INFO: pod-init-e3f25dbe-5e64-4732-8132-bc1e8e27a112 started at 2023-01-14 18:35:14 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Init container init1 ready: true, restart count 0 Jan 14 18:35:27.380: INFO: Init container init2 ready: false, restart count 0 Jan 14 18:35:27.380: INFO: Container run1 ready: false, restart count 0 Jan 14 18:35:27.380: INFO: update-demo-nautilus-9757j started at 2023-01-14 18:30:58 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container update-demo ready: false, restart count 0 Jan 14 18:35:27.380: INFO: externalname-service-2nvd6 started at 2023-01-14 18:34:36 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container externalname-service ready: true, restart count 0 Jan 14 18:35:28.034: INFO: Latency metrics for node capz-67tgp2-mp-0000001 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/14/23 18:35:28.034 (4.563s) < Exit [DeferCleanup (Each)] [sig-apps] Deployment - dump namespaces | framework.go:206 @ 01/14/23 18:35:28.034 (4.563s) > Enter [DeferCleanup (Each)] [sig-apps] Deployment - tear down framework | framework.go:203 @ 01/14/23 18:35:28.034 STEP: Destroying namespace "deployment-6429" for this suite. - test/e2e/framework/framework.go:347 @ 01/14/23 18:35:28.034 < Exit [DeferCleanup (Each)] [sig-apps] Deployment - tear down framework | framework.go:203 @ 01/14/23 18:35:28.145 (111ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/14/23 18:35:28.145 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/14/23 18:35:28.145 (0s)
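One detail in this record worth a sketch before the junit_01.xml copy below: at the "patching the Deployment" step the API server logs a warning about unknown field "spec.template.spec.TerminationGracePeriodSeconds". The key in that warning matches the Go struct field name rather than the JSON tag; in the Deployment schema the field is lowerCamelCase, terminationGracePeriodSeconds. The following is a minimal client-go sketch of a schema-conformant strategic-merge patch; the function and variable names are illustrative assumptions, not taken from the e2e test.

```go
package patchsketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchTerminationGracePeriod applies a strategic-merge patch using the
// schema's lowerCamelCase key. The "unknown field" warning in the log points
// at a key spelled like the Go struct field instead.
// Names here are illustrative, not from the e2e test.
func patchTerminationGracePeriod(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	patch := []byte(`{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":1}}}}`)
	_, err := cs.AppsV1().Deployments(ns).Patch(ctx, name,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```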
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sDeployment\sshould\srun\sthe\slifecycle\sof\sa\sDeployment\s\[Conformance\]$'
[FAILED] failed to see MODIFIED event: watch closed before UntilWithoutRetry timeout In [It] at: test/e2e/apps/deployment.go:424 @ 01/14/23 18:35:22.952from junit_01.xml
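The failure itself comes from the watch-based status check: per the goroutine trace later in this record (test/e2e/apps/deployment.go:378), the test registers a condition with watchtools.Until and reports this error when the watch ends before a MODIFIED event satisfies the condition within the timeout. Below is a minimal, self-contained sketch of that pattern, assuming a configured kubernetes.Interface; the helper name waitForDeploymentModified and its parameters are illustrative, not the e2e framework's own API.

```go
package watchsketch

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	watchtools "k8s.io/client-go/tools/watch"
)

// waitForDeploymentModified blocks until a MODIFIED event is observed for the
// named Deployment or the timeout expires. "watch closed before
// UntilWithoutRetry timeout" is the error surfaced when the event stream ends
// before the condition below ever returns true.
func waitForDeploymentModified(ctx context.Context, cs kubernetes.Interface, ns, name, resourceVersion string, timeout time.Duration) error {
	ctxUntil, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	// Watch source starting at the supplied resourceVersion; watchtools.Until
	// re-establishes the watch through this ListWatch as needed.
	lw := &cache.ListWatch{
		WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
			options.FieldSelector = "metadata.name=" + name
			return cs.AppsV1().Deployments(ns).Watch(ctxUntil, options)
		},
	}

	_, err := watchtools.Until(ctxUntil, resourceVersion, lw, func(event watch.Event) (bool, error) {
		d, ok := event.Object.(*appsv1.Deployment)
		if !ok {
			return false, fmt.Errorf("unexpected object type %T", event.Object)
		}
		return event.Type == watch.Modified && d.Name == name, nil
	})
	return err
}
```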
> Enter [BeforeEach] [sig-apps] Deployment - set up framework | framework.go:188 @ 01/14/23 18:28:39.173 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/14/23 18:28:39.173 Jan 14 18:28:39.173: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig STEP: Building a namespace api object, basename deployment - test/e2e/framework/framework.go:247 @ 01/14/23 18:28:39.174 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/14/23 18:28:39.495 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/14/23 18:28:39.701 < Exit [BeforeEach] [sig-apps] Deployment - set up framework | framework.go:188 @ 01/14/23 18:28:39.906 (733ms) > Enter [BeforeEach] [sig-apps] Deployment - test/e2e/framework/metrics/init/init.go:33 @ 01/14/23 18:28:39.906 < Exit [BeforeEach] [sig-apps] Deployment - test/e2e/framework/metrics/init/init.go:33 @ 01/14/23 18:28:39.906 (0s) > Enter [BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 @ 01/14/23 18:28:39.906 < Exit [BeforeEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:91 @ 01/14/23 18:28:39.906 (0s) > Enter [It] should run the lifecycle of a Deployment [Conformance] - test/e2e/apps/deployment.go:185 @ 01/14/23 18:28:39.906 STEP: creating a Deployment - test/e2e/apps/deployment.go:207 @ 01/14/23 18:28:40.017 STEP: waiting for Deployment to be created - test/e2e/apps/deployment.go:217 @ 01/14/23 18:28:40.13 STEP: waiting for all Replicas to be Ready - test/e2e/apps/deployment.go:235 @ 01/14/23 18:28:40.252 Jan 14 18:28:40.372: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:28:40.372: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:28:40.372: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:28:40.372: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:28:40.372: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:28:40.372: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:28:40.417: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:28:40.417: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 14 18:29:16.405: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 14 18:29:16.405: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 14 18:29:43.324: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment - test/e2e/apps/deployment.go:253 @ 01/14/23 18:29:43.324 W0114 18:29:43.483877 60153 warnings.go:70] unknown field 
"spec.template.spec.TerminationGracePeriodSeconds" Jan 14 18:29:43.597: INFO: observed event type ADDED STEP: waiting for Replicas to scale - test/e2e/apps/deployment.go:294 @ 01/14/23 18:29:43.597 Jan 14 18:29:43.724: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.724: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.724: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.724: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.730: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.730: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.730: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.730: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 0 Jan 14 18:29:43.741: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 Jan 14 18:29:43.741: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 Jan 14 18:29:43.741: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.741: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.748: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.748: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.748: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.748: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.753: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.753: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:29:43.810: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 Jan 14 18:29:43.810: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 Jan 14 18:29:43.861: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 Jan 14 18:29:43.861: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 Jan 14 18:30:28.716: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:30:28.716: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 Jan 14 18:30:28.768: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 STEP: listing Deployments - test/e2e/apps/deployment.go:315 @ 01/14/23 18:30:28.768 Jan 14 18:30:28.872: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment - test/e2e/apps/deployment.go:332 @ 01/14/23 18:30:28.872 Jan 14 18:30:29.102: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 STEP: fetching the DeploymentStatus - test/e2e/apps/deployment.go:367 @ 01/14/23 18:30:29.102 Jan 14 18:30:29.325: INFO: 
observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:30:29.331: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:30:29.331: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:30:29.340: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:30:29.340: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:30:58.657: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:33:10.214: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:33:10.250: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Jan 14 18:33:10.296: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] Automatically polling progress: [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] (Spec Runtime: 5m0.734s) test/e2e/apps/deployment.go:185 In [It] (Node Runtime: 5m0.001s) test/e2e/apps/deployment.go:185 At [By Step] fetching the DeploymentStatus (Step Runtime: 3m10.805s) test/e2e/apps/deployment.go:367 Spec Goroutine goroutine 526 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x80f0620, 0xc001106540}, {0x80cc560, 0xc001327980}, {0xc0057e1830, 0x1, 0x45d964b800?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x80f0620, 0xc001106540}, {0xc0024001d7?, 0x80a8310?}, {0x80bca00?, 0xc000e8e630?}, {0xc0057e1830, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:113 > k8s.io/kubernetes/test/e2e/apps.glob..func5.13({0x7f52910591d8?, 0xc00135b600}) test/e2e/apps/deployment.go:378 | ctxUntil, cancel = context.WithTimeout(ctx, f.Timeouts.PodStart) | defer cancel() > _, err = watchtools.Until(ctxUntil, deploymentsList.ResourceVersion, w, func(event watch.Event) (bool, error) { | if deployment, ok := event.Object.(*appsv1.Deployment); ok { | found := deployment.ObjectMeta.Name == testDeployment.Name && k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x80f8d88?, 0xc00135b600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Automatically polling progress: [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] (Spec Runtime: 5m20.737s) test/e2e/apps/deployment.go:185 In [It] (Node Runtime: 5m20.004s) test/e2e/apps/deployment.go:185 At [By Step] fetching the 
DeploymentStatus (Step Runtime: 3m30.808s) test/e2e/apps/deployment.go:367 Spec Goroutine goroutine 526 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x80f0620, 0xc001106540}, {0x80cc560, 0xc001327980}, {0xc0057e1830, 0x1, 0x45d964b800?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x80f0620, 0xc001106540}, {0xc0024001d7?, 0x80a8310?}, {0x80bca00?, 0xc000e8e630?}, {0xc0057e1830, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:113 > k8s.io/kubernetes/test/e2e/apps.glob..func5.13({0x7f52910591d8?, 0xc00135b600}) test/e2e/apps/deployment.go:378 | ctxUntil, cancel = context.WithTimeout(ctx, f.Timeouts.PodStart) | defer cancel() > _, err = watchtools.Until(ctxUntil, deploymentsList.ResourceVersion, w, func(event watch.Event) (bool, error) { | if deployment, ok := event.Object.(*appsv1.Deployment); ok { | found := deployment.ObjectMeta.Name == testDeployment.Name && k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x80f8d88?, 0xc00135b600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Automatically polling progress: [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] (Spec Runtime: 5m40.739s) test/e2e/apps/deployment.go:185 In [It] (Node Runtime: 5m40.006s) test/e2e/apps/deployment.go:185 At [By Step] fetching the DeploymentStatus (Step Runtime: 3m50.81s) test/e2e/apps/deployment.go:367 Spec Goroutine goroutine 526 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x80f0620, 0xc001106540}, {0x80cc560, 0xc001327980}, {0xc0057e1830, 0x1, 0x45d964b800?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x80f0620, 0xc001106540}, {0xc0024001d7?, 0x80a8310?}, {0x80bca00?, 0xc000e8e630?}, {0xc0057e1830, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:113 > k8s.io/kubernetes/test/e2e/apps.glob..func5.13({0x7f52910591d8?, 0xc00135b600}) test/e2e/apps/deployment.go:378 | ctxUntil, cancel = context.WithTimeout(ctx, f.Timeouts.PodStart) | defer cancel() > _, err = watchtools.Until(ctxUntil, deploymentsList.ResourceVersion, w, func(event watch.Event) (bool, error) { | if deployment, ok := event.Object.(*appsv1.Deployment); ok { | found := deployment.ObjectMeta.Name == testDeployment.Name && k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x80f8d88?, 0xc00135b600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Automatically polling progress: [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] (Spec Runtime: 6m0.741s) test/e2e/apps/deployment.go:185 In [It] (Node Runtime: 6m0.008s) test/e2e/apps/deployment.go:185 At [By Step] fetching the DeploymentStatus (Step Runtime: 4m10.812s) test/e2e/apps/deployment.go:367 Spec Goroutine goroutine 526 [select, 2 minutes] 
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x80f0620, 0xc001106540}, {0x80cc560, 0xc001327980}, {0xc0057e1830, 0x1, 0x45d964b800?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x80f0620, 0xc001106540}, {0xc0024001d7?, 0x80a8310?}, {0x80bca00?, 0xc000e8e630?}, {0xc0057e1830, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:113 > k8s.io/kubernetes/test/e2e/apps.glob..func5.13({0x7f52910591d8?, 0xc00135b600}) test/e2e/apps/deployment.go:378 | ctxUntil, cancel = context.WithTimeout(ctx, f.Timeouts.PodStart) | defer cancel() > _, err = watchtools.Until(ctxUntil, deploymentsList.ResourceVersion, w, func(event watch.Event) (bool, error) { | if deployment, ok := event.Object.(*appsv1.Deployment); ok { | found := deployment.ObjectMeta.Name == testDeployment.Name && k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x80f8d88?, 0xc00135b600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Automatically polling progress: [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] (Spec Runtime: 6m20.744s) test/e2e/apps/deployment.go:185 In [It] (Node Runtime: 6m20.01s) test/e2e/apps/deployment.go:185 At [By Step] fetching the DeploymentStatus (Step Runtime: 4m30.815s) test/e2e/apps/deployment.go:367 Spec Goroutine goroutine 526 [select, 2 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x80f0620, 0xc001106540}, {0x80cc560, 0xc001327980}, {0xc0057e1830, 0x1, 0x45d964b800?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x80f0620, 0xc001106540}, {0xc0024001d7?, 0x80a8310?}, {0x80bca00?, 0xc000e8e630?}, {0xc0057e1830, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:113 > k8s.io/kubernetes/test/e2e/apps.glob..func5.13({0x7f52910591d8?, 0xc00135b600}) test/e2e/apps/deployment.go:378 | ctxUntil, cancel = context.WithTimeout(ctx, f.Timeouts.PodStart) | defer cancel() > _, err = watchtools.Until(ctxUntil, deploymentsList.ResourceVersion, w, func(event watch.Event) (bool, error) { | if deployment, ok := event.Object.(*appsv1.Deployment); ok { | found := deployment.ObjectMeta.Name == testDeployment.Name && k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x80f8d88?, 0xc00135b600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Automatically polling progress: [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] (Spec Runtime: 6m40.746s) test/e2e/apps/deployment.go:185 In [It] (Node Runtime: 6m40.013s) test/e2e/apps/deployment.go:185 At [By Step] fetching the DeploymentStatus (Step Runtime: 4m50.817s) test/e2e/apps/deployment.go:367 Spec Goroutine goroutine 526 [select, 3 minutes] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x80f0620, 0xc001106540}, {0x80cc560, 0xc001327980}, {0xc0057e1830, 0x1, 0x45d964b800?}) 
vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x80f0620, 0xc001106540}, {0xc0024001d7?, 0x80a8310?}, {0x80bca00?, 0xc000e8e630?}, {0xc0057e1830, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:113 > k8s.io/kubernetes/test/e2e/apps.glob..func5.13({0x7f52910591d8?, 0xc00135b600}) test/e2e/apps/deployment.go:378 | ctxUntil, cancel = context.WithTimeout(ctx, f.Timeouts.PodStart) | defer cancel() > _, err = watchtools.Until(ctxUntil, deploymentsList.ResourceVersion, w, func(event watch.Event) (bool, error) { | if deployment, ok := event.Object.(*appsv1.Deployment); ok { | found := deployment.ObjectMeta.Name == testDeployment.Name && k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x80f8d88?, 0xc00135b600}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Jan 14 18:35:22.676: INFO: observed Deployment test-deployment in namespace deployment-6429 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus - test/e2e/apps/deployment.go:394 @ 01/14/23 18:35:22.744 Jan 14 18:35:22.952: INFO: observed event type ERROR Jan 14 18:35:22.952: INFO: Unexpected error: failed to see MODIFIED event: <*errors.errorString | 0xc00056fe50>: { s: "watch closed before UntilWithoutRetry timeout", } [FAILED] failed to see MODIFIED event: watch closed before UntilWithoutRetry timeout In [It] at: test/e2e/apps/deployment.go:424 @ 01/14/23 18:35:22.952 < Exit [It] should run the lifecycle of a Deployment [Conformance] - test/e2e/apps/deployment.go:185 @ 01/14/23 18:35:22.952 (6m43.046s) > Enter [AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 @ 01/14/23 18:35:22.952 Jan 14 18:35:23.060: INFO: Deployment "test-deployment": &Deployment{ObjectMeta:{test-deployment deployment-6429 2549d3d4-b5bc-406b-a543-5d72dc5e36f8 19829 3 2023-01-14 18:28:40 +0000 UTC <nil> <nil> map[test-deployment:updated test-deployment-static:true] map[deployment.kubernetes.io/revision:3] [] [] [{e2e.test Update apps/v1 2023-01-14 18:30:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-deployment":{},"f:test-deployment-static":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 18:35:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002656918 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil <nil> [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:2,UpdatedReplicas:2,AvailableReplicas:2,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-14 18:30:58 +0000 UTC,LastTransitionTime:2023-01-14 18:30:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-deployment-7b7876f9d6" has successfully progressed.,LastUpdateTime:2023-01-14 18:35:22 +0000 UTC,LastTransitionTime:2023-01-14 18:28:40 +0000 UTC,},},ReadyReplicas:2,CollisionCount:nil,},} Jan 14 18:35:23.178: INFO: New ReplicaSet "test-deployment-7b7876f9d6" of Deployment "test-deployment": &ReplicaSet{ObjectMeta:{test-deployment-7b7876f9d6 deployment-6429 5f7915a8-0284-447b-97fb-9757918f8bca 19821 2 2023-01-14 18:30:28 +0000 UTC <nil> <nil> map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 2549d3d4-b5bc-406b-a543-5d72dc5e36f8 0xc0006363e7 0xc0006363e8}] [] [{kube-controller-manager Update apps/v1 2023-01-14 18:33:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2549d3d4-b5bc-406b-a543-5d72dc5e36f8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 18:35:22 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7b7876f9d6,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0006364a0 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil <nil> [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} Jan 14 18:35:23.178: INFO: All old ReplicaSets of Deployment "test-deployment": Jan 14 18:35:23.178: INFO: &ReplicaSet{ObjectMeta:{test-deployment-f4dbc4647 deployment-6429 bcbdc001-3a56-4d8c-8ad5-7f1d95a9e873 10071 3 2023-01-14 18:28:40 +0000 UTC <nil> <nil> map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 2549d3d4-b5bc-406b-a543-5d72dc5e36f8 0xc000636677 0xc000636678}] [] [{kube-controller-manager Update apps/v1 2023-01-14 18:30:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2549d3d4-b5bc-406b-a543-5d72dc5e36f8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 18:30:28 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: f4dbc4647,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000636740 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil <nil> [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 14 18:35:23.178: INFO: &ReplicaSet{ObjectMeta:{test-deployment-7df74c55ff deployment-6429 da25b88c-2ba1-46d7-839f-0800bdd9c4e3 19828 4 2023-01-14 18:29:43 +0000 UTC <nil> <nil> map[pod-template-hash:7df74c55ff test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 2549d3d4-b5bc-406b-a543-5d72dc5e36f8 0xc000636507 0xc000636508}] [] [{kube-controller-manager Update apps/v1 2023-01-14 18:35:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2549d3d4-b5bc-406b-a543-5d72dc5e36f8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 18:35:22 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7df74c55ff,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:7df74c55ff test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/pause:3.9 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0006365e0 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil <nil> [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 14 18:35:23.305: INFO: Pod "test-deployment-7b7876f9d6-cjtpl" is available: &Pod{ObjectMeta:{test-deployment-7b7876f9d6-cjtpl test-deployment-7b7876f9d6- deployment-6429 c3330d93-8c51-4081-954b-d7863d2e34ec 19820 0 2023-01-14 18:33:10 +0000 UTC <nil> <nil> map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[cni.projectcalico.org/containerID:c6a21cd15c415a8f33cc19dc48d6461ef15bbb3c3058cb691a000e03458df840 cni.projectcalico.org/podIP:192.168.14.143/32 cni.projectcalico.org/podIPs:192.168.14.143/32] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 5f7915a8-0284-447b-97fb-9757918f8bca 0xc000636de7 0xc000636de8}] [] [{kube-controller-manager Update v1 2023-01-14 18:33:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f7915a8-0284-447b-97fb-9757918f8bca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 
2023-01-14 18:33:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-14 18:35:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.14.143\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mm9p6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mm9p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-67tgp2-mp-0000001,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReferenc
e{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:33:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:35:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:35:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:33:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.14.143,StartTime:2023-01-14 18:33:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 18:35:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://9ae7109d0c33623928905f29299b05b518e7b32e644707fd2f2e02f93b1cd72c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.14.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 18:35:23.306: INFO: Pod "test-deployment-7b7876f9d6-zqb4p" is available: &Pod{ObjectMeta:{test-deployment-7b7876f9d6-zqb4p test-deployment-7b7876f9d6- deployment-6429 232acc3e-d6c3-454c-a9f8-430418cc77d3 15076 0 2023-01-14 18:30:28 +0000 UTC <nil> <nil> map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[cni.projectcalico.org/containerID:75d42c2a467b83339577445f3f1418cd4a3b1af64312c272c9c68379de838b6d cni.projectcalico.org/podIP:192.168.243.197/32 cni.projectcalico.org/podIPs:192.168.243.197/32] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 5f7915a8-0284-447b-97fb-9757918f8bca 0xc000636ff7 0xc000636ff8}] [] [{kube-controller-manager Update v1 2023-01-14 18:30:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f7915a8-0284-447b-97fb-9757918f8bca\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2023-01-14 18:30:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-14 18:33:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.243.197\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w245t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w245t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-67tgp2-mp-0000000,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:30:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:32:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:32:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:30:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:192.168.243.197,StartTime:2023-01-14 18:30:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 18:32:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://f48bc76bb53855baa4ce4d45c9931a0b48b21e0a6906aacc614e480e98d562f6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.243.197,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 14 18:35:23.306: INFO: Pod "test-deployment-7df74c55ff-84hdq" is available: &Pod{ObjectMeta:{test-deployment-7df74c55ff-84hdq test-deployment-7df74c55ff- deployment-6429 0b99fc2d-649d-4337-bc61-d396a4df9b22 19825 0 2023-01-14 18:29:43 +0000 UTC 2023-01-14 18:35:23 +0000 UTC 0xc0006371e0 map[pod-template-hash:7df74c55ff test-deployment-static:true] map[cni.projectcalico.org/containerID:60f03ecaba0062757e05bd2cbfcf17abf48a19b1b31c860848e3400ed09ed165 cni.projectcalico.org/podIP:192.168.14.251/32 cni.projectcalico.org/podIPs:192.168.14.251/32] [{apps/v1 ReplicaSet test-deployment-7df74c55ff 
da25b88c-2ba1-46d7-839f-0800bdd9c4e3 0xc000637217 0xc000637218}] [] [{kube-controller-manager Update v1 2023-01-14 18:29:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"da25b88c-2ba1-46d7-839f-0800bdd9c4e3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2023-01-14 18:30:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-14 18:30:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.14.251\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s2j78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2j78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,
WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-67tgp2-mp-0000001,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:29:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:30:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:30:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 18:29:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.14.251,StartTime:2023-01-14 18:29:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 18:30:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.9,ImageID:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,ContainerID:containerd://af502340a96eeef8c889108b9afc9181435dfdd60ab707f008d5b119efbaee05,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.14.251,},},EphemeralContainerStatuses:[]ContainerStatus{},},} < Exit [AfterEach] [sig-apps] Deployment - test/e2e/apps/deployment.go:84 @ 01/14/23 18:35:23.306 (354ms) > Enter [AfterEach] [sig-apps] Deployment - test/e2e/framework/node/init/init.go:33 @ 01/14/23 18:35:23.306 Jan 14 18:35:23.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-apps] Deployment - test/e2e/framework/node/init/init.go:33 @ 01/14/23 18:35:23.471 (165ms) > Enter [DeferCleanup (Each)] [sig-apps] Deployment - test/e2e/framework/metrics/init/init.go:35 @ 01/14/23 
18:35:23.471 < Exit [DeferCleanup (Each)] [sig-apps] Deployment - test/e2e/framework/metrics/init/init.go:35 @ 01/14/23 18:35:23.471 (0s) > Enter [DeferCleanup (Each)] [sig-apps] Deployment - dump namespaces | framework.go:206 @ 01/14/23 18:35:23.471 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/14/23 18:35:23.471 STEP: Collecting events from namespace "deployment-6429". - test/e2e/framework/debug/dump.go:42 @ 01/14/23 18:35:23.471 STEP: Found 48 events. - test/e2e/framework/debug/dump.go:46 @ 01/14/23 18:35:23.651 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:28:40 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-f4dbc4647 to 2 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:28:40 +0000 UTC - event for test-deployment-f4dbc4647: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-f4dbc4647-44kwf Jan 14 18:35:23.651: INFO: At 2023-01-14 18:28:40 +0000 UTC - event for test-deployment-f4dbc4647: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-f4dbc4647-sr5jn Jan 14 18:35:23.651: INFO: At 2023-01-14 18:28:40 +0000 UTC - event for test-deployment-f4dbc4647-44kwf: {default-scheduler } Scheduled: Successfully assigned deployment-6429/test-deployment-f4dbc4647-44kwf to capz-67tgp2-mp-0000000 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:28:40 +0000 UTC - event for test-deployment-f4dbc4647-sr5jn: {default-scheduler } Scheduled: Successfully assigned deployment-6429/test-deployment-f4dbc4647-sr5jn to capz-67tgp2-mp-0000001 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:07 +0000 UTC - event for test-deployment-f4dbc4647-sr5jn: {kubelet capz-67tgp2-mp-0000001} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:08 +0000 UTC - event for test-deployment-f4dbc4647-44kwf: {kubelet capz-67tgp2-mp-0000000} Created: Created container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:08 +0000 UTC - event for test-deployment-f4dbc4647-44kwf: {kubelet capz-67tgp2-mp-0000000} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:08 +0000 UTC - event for test-deployment-f4dbc4647-sr5jn: {kubelet capz-67tgp2-mp-0000001} Created: Created container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:09 +0000 UTC - event for test-deployment-f4dbc4647-44kwf: {kubelet capz-67tgp2-mp-0000000} Started: Started container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:12 +0000 UTC - event for test-deployment-f4dbc4647-sr5jn: {kubelet capz-67tgp2-mp-0000001} Started: Started container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:43 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled down replica set test-deployment-f4dbc4647 to 1 from 2 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:43 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-7df74c55ff to 1 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:43 +0000 UTC - event for test-deployment-7df74c55ff: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-7df74c55ff-84hdq Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:43 +0000 UTC - event for test-deployment-7df74c55ff-84hdq: {default-scheduler } Scheduled: Successfully assigned 
deployment-6429/test-deployment-7df74c55ff-84hdq to capz-67tgp2-mp-0000001 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:43 +0000 UTC - event for test-deployment-f4dbc4647: {replicaset-controller } SuccessfulDelete: Deleted pod: test-deployment-f4dbc4647-44kwf Jan 14 18:35:23.651: INFO: At 2023-01-14 18:29:43 +0000 UTC - event for test-deployment-f4dbc4647-44kwf: {kubelet capz-67tgp2-mp-0000000} Killing: Stopping container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:16 +0000 UTC - event for test-deployment-7df74c55ff-84hdq: {kubelet capz-67tgp2-mp-0000001} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:19 +0000 UTC - event for test-deployment-7df74c55ff-84hdq: {kubelet capz-67tgp2-mp-0000001} Created: Created container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:23 +0000 UTC - event for test-deployment-7df74c55ff-84hdq: {kubelet capz-67tgp2-mp-0000001} Started: Started container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled down replica set test-deployment-f4dbc4647 to 0 from 1 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-7df74c55ff to 2 from 1 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-7b7876f9d6 to 1 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment-7b7876f9d6: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-7b7876f9d6-zqb4p Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment-7df74c55ff: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-7df74c55ff-s9lvr Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment-7df74c55ff-s9lvr: {default-scheduler } Scheduled: Successfully assigned deployment-6429/test-deployment-7df74c55ff-s9lvr to capz-67tgp2-mp-0000000 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment-f4dbc4647: {replicaset-controller } SuccessfulDelete: Deleted pod: test-deployment-f4dbc4647-sr5jn Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:28 +0000 UTC - event for test-deployment-f4dbc4647-sr5jn: {kubelet capz-67tgp2-mp-0000001} Killing: Stopping container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:29 +0000 UTC - event for test-deployment-7b7876f9d6-zqb4p: {default-scheduler } Scheduled: Successfully assigned deployment-6429/test-deployment-7b7876f9d6-zqb4p to capz-67tgp2-mp-0000000 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:45 +0000 UTC - event for test-deployment-7b7876f9d6-zqb4p: {kubelet capz-67tgp2-mp-0000000} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:45 +0000 UTC - event for test-deployment-7df74c55ff-s9lvr: {kubelet capz-67tgp2-mp-0000000} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:48 +0000 UTC - event for test-deployment-7df74c55ff-s9lvr: {kubelet capz-67tgp2-mp-0000000} Created: Created container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:30:50 +0000 UTC - event for test-deployment-7df74c55ff-s9lvr: {kubelet 
capz-67tgp2-mp-0000000} Started: Started container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:32:52 +0000 UTC - event for test-deployment-7b7876f9d6-zqb4p: {kubelet capz-67tgp2-mp-0000000} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 3.529998755s (2m6.377886862s including waiting) Jan 14 18:35:23.651: INFO: At 2023-01-14 18:32:54 +0000 UTC - event for test-deployment-7b7876f9d6-zqb4p: {kubelet capz-67tgp2-mp-0000000} Created: Created container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:32:57 +0000 UTC - event for test-deployment-7b7876f9d6-zqb4p: {kubelet capz-67tgp2-mp-0000000} Started: Started container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:33:10 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled down replica set test-deployment-7df74c55ff to 1 from 2 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:33:10 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-deployment-7b7876f9d6 to 2 from 1 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:33:10 +0000 UTC - event for test-deployment-7b7876f9d6: {replicaset-controller } SuccessfulCreate: Created pod: test-deployment-7b7876f9d6-cjtpl Jan 14 18:35:23.651: INFO: At 2023-01-14 18:33:10 +0000 UTC - event for test-deployment-7b7876f9d6-cjtpl: {default-scheduler } Scheduled: Successfully assigned deployment-6429/test-deployment-7b7876f9d6-cjtpl to capz-67tgp2-mp-0000001 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:33:10 +0000 UTC - event for test-deployment-7df74c55ff: {replicaset-controller } SuccessfulDelete: Deleted pod: test-deployment-7df74c55ff-s9lvr Jan 14 18:35:23.651: INFO: At 2023-01-14 18:33:10 +0000 UTC - event for test-deployment-7df74c55ff-s9lvr: {kubelet capz-67tgp2-mp-0000000} Killing: Stopping container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:33:16 +0000 UTC - event for test-deployment-7b7876f9d6-cjtpl: {kubelet capz-67tgp2-mp-0000001} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" Jan 14 18:35:23.651: INFO: At 2023-01-14 18:35:10 +0000 UTC - event for test-deployment-7b7876f9d6-cjtpl: {kubelet capz-67tgp2-mp-0000001} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 601.673623ms (1m53.668389496s including waiting) Jan 14 18:35:23.651: INFO: At 2023-01-14 18:35:10 +0000 UTC - event for test-deployment-7b7876f9d6-cjtpl: {kubelet capz-67tgp2-mp-0000001} Created: Created container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:35:11 +0000 UTC - event for test-deployment-7b7876f9d6-cjtpl: {kubelet capz-67tgp2-mp-0000001} Started: Started container test-deployment Jan 14 18:35:23.651: INFO: At 2023-01-14 18:35:22 +0000 UTC - event for test-deployment: {deployment-controller } ScalingReplicaSet: Scaled down replica set test-deployment-7df74c55ff to 0 from 1 Jan 14 18:35:23.651: INFO: At 2023-01-14 18:35:22 +0000 UTC - event for test-deployment-7df74c55ff: {replicaset-controller } SuccessfulDelete: Deleted pod: test-deployment-7df74c55ff-84hdq Jan 14 18:35:23.774: INFO: POD NODE PHASE GRACE CONDITIONS Jan 14 18:35:23.774: INFO: test-deployment-7b7876f9d6-cjtpl capz-67tgp2-mp-0000001 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:33:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:35:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:35:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2023-01-14 18:33:10 +0000 UTC }] Jan 14 18:35:23.774: INFO: test-deployment-7b7876f9d6-zqb4p capz-67tgp2-mp-0000000 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:32:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:32:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:28 +0000 UTC }] Jan 14 18:35:23.774: INFO: test-deployment-7df74c55ff-84hdq capz-67tgp2-mp-0000001 Running 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC }] Jan 14 18:35:23.774: INFO: Jan 14 18:35:24.519: INFO: Logging node info for node capz-67tgp2-control-plane-2chph Jan 14 18:35:24.633: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-control-plane-2chph 28170de3-aa87-4a67-a5ad-65493aeb11b3 12074 0 2023-01-14 18:16:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-control-plane-2chph kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:northeurope-2] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-67tgp2-control-plane-tj79f cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-67tgp2-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.35.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-14 18:16:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubeadm Update v1 2023-01-14 18:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-01-14 18:17:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 18:17:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-node-manager Update v1 2023-01-14 18:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {manager Update v1 2023-01-14 18:20:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-14 18:31:33 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.0.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachines/capz-67tgp2-control-plane-2chph,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[192.168.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:17:47 +0000 UTC,LastTransitionTime:2023-01-14 18:17:47 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:17:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.0.0.4,},NodeAddress{Type:Hostname,Address:capz-67tgp2-control-plane-2chph,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aa56c5629889429baa21826756529ecb,SystemUUID:744c1c53-9da3-134c-b7da-86c573f76ec3,BootID:b6ed8583-6ec6-40d3-b9e2-4bfd39a59694,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-controller-manager@sha256:a52d9377e1464d9e2d827e6555d7edf9082b5d85b60676d2fd74b87e202bad0c capzci.azurecr.io/azure-cloud-controller-manager:63c1cd3],SizeBytes:16980267,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 
registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:35:24.634: INFO: Logging kubelet events for node capz-67tgp2-control-plane-2chph Jan 14 18:35:24.740: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-control-plane-2chph Jan 14 18:35:24.953: INFO: kube-proxy-j74l7 started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:35:24.953: INFO: calico-node-g5dqz started at 2023-01-14 18:17:11 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:35:24.953: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:35:24.953: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:35:24.953: INFO: cloud-node-manager-5qlnt started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:35:24.953: INFO: cloud-controller-manager-64479fbc67-xdds2 started at 2023-01-14 18:20:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Container cloud-controller-manager ready: true, restart count 0 Jan 14 18:35:24.953: INFO: etcd-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Container etcd ready: true, restart count 0 Jan 14 18:35:24.953: INFO: kube-apiserver-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Container kube-apiserver ready: true, restart count 0 Jan 14 18:35:24.953: INFO: kube-scheduler-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:45 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Container kube-scheduler ready: true, restart count 0 Jan 14 18:35:24.953: INFO: kube-controller-manager-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.953: INFO: Container kube-controller-manager ready: true, restart count 0 Jan 14 18:35:25.488: INFO: Latency metrics for node capz-67tgp2-control-plane-2chph Jan 14 18:35:25.488: INFO: Logging node info for node capz-67tgp2-mp-0000000 Jan 14 18:35:25.595: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-mp-0000000 d6bf69fc-90f8-43c8-9623-356f58ea157f 16641 0 2023-01-14 18:19:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:northeurope failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-mp-0000000 kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 
cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/owner-kind:MachinePool cluster.x-k8s.io/owner-name:capz-67tgp2-mp-0 kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.243.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-14 18:19:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 18:20:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.1.0/24\"":{}}}} } {manager Update v1 2023-01-14 18:21:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kubelet Update v1 2023-01-14 18:33:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.1.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-67tgp2-mp-0/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[192.168.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31025332224 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27922798956 0} {<nil>} 27922798956 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:20:12 +0000 UTC,LastTransitionTime:2023-01-14 18:20:12 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-67tgp2-mp-0000000,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:95d9ab6ead5141e2b46b1d18fec95432,SystemUUID:3fc8a171-f25a-2049-95d3-3c4be76d51a7,BootID:b9ac1a12-eff5-45ad-b970-9df972ef339e,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 
registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:35:25.596: INFO: Logging kubelet events for node capz-67tgp2-mp-0000000 Jan 14 18:35:25.704: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-mp-0000000 Jan 14 18:35:25.883: INFO: tester started at 2023-01-14 18:35:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container tester ready: true, restart count 0 Jan 14 18:35:25.883: INFO: coredns-56f4c55bf9-zp98j started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container coredns 
ready: true, restart count 0 Jan 14 18:35:25.883: INFO: sample-webhook-deployment-865554f4d9-bb228 started at 2023-01-14 18:35:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container sample-webhook ready: false, restart count 0 Jan 14 18:35:25.883: INFO: test-rolling-update-deployment-7549d9f46d-pklnz started at 2023-01-14 18:35:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container agnhost ready: false, restart count 0 Jan 14 18:35:25.883: INFO: kube-proxy-8jftq started at 2023-01-14 18:19:05 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:35:25.883: INFO: downward-api-1073a5a4-0d5f-4af3-9e34-20a20f87b5ea started at 2023-01-14 18:35:19 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container dapi-container ready: false, restart count 0 Jan 14 18:35:25.883: INFO: pod-qos-class-bddd171a-e154-4523-9abb-837b2095dfbb started at 2023-01-14 18:33:27 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container agnhost ready: false, restart count 0 Jan 14 18:35:25.883: INFO: pod1 started at 2023-01-14 18:34:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container agnhost-container ready: true, restart count 0 Jan 14 18:35:25.883: INFO: dns-test-eeb40b41-fc0f-431a-8cac-0735a1f4243b started at 2023-01-14 18:33:46 +0000 UTC (0+3 container statuses recorded) Jan 14 18:35:25.883: INFO: Container jessie-querier ready: false, restart count 0 Jan 14 18:35:25.883: INFO: Container querier ready: false, restart count 0 Jan 14 18:35:25.883: INFO: Container webserver ready: false, restart count 0 Jan 14 18:35:25.883: INFO: update-demo-nautilus-mcn6g started at 2023-01-14 18:32:15 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container update-demo ready: true, restart count 0 Jan 14 18:35:25.883: INFO: pod-secrets-79279ea7-8705-47e4-a9ab-02b8a5479454 started at <nil> (0+0 container statuses recorded) Jan 14 18:35:25.883: INFO: execpodg9czm started at 2023-01-14 18:35:00 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container agnhost-container ready: true, restart count 0 Jan 14 18:35:25.883: INFO: test-deployment-7b7876f9d6-zqb4p started at 2023-01-14 18:30:28 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:35:25.883: INFO: alpine-nnp-true-a4de807c-d017-4cbf-80dd-9efa36816371 started at 2023-01-14 18:33:49 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container alpine-nnp-true-a4de807c-d017-4cbf-80dd-9efa36816371 ready: false, restart count 0 Jan 14 18:35:25.883: INFO: metrics-server-795d765ff8-rskk8 started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container metrics-server ready: true, restart count 0 Jan 14 18:35:25.883: INFO: ss2-0 started at 2023-01-14 18:31:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:25.883: INFO: externalname-service-pq2wx started at 2023-01-14 18:34:36 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container externalname-service ready: true, restart count 0 Jan 14 18:35:25.883: INFO: coredns-56f4c55bf9-4pfjc started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container coredns ready: true, restart count 0 
Jan 14 18:35:25.883: INFO: ss2-2 started at 2023-01-14 18:35:21 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container webserver ready: false, restart count 0 Jan 14 18:35:25.883: INFO: ss2-0 started at 2023-01-14 18:34:37 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:25.883: INFO: cloud-node-manager-l846f started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:35:25.883: INFO: test-ss-0 started at 2023-01-14 18:28:36 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:25.883: INFO: calico-kube-controllers-657b584867-tn8lq started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 14 18:35:25.883: INFO: calico-node-t5npc started at 2023-01-14 18:19:05 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:35:25.883: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:35:25.883: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:35:25.883: INFO: pod-configmaps-dfa2997d-eef5-48de-9ba4-2617684da066 started at <nil> (0+0 container statuses recorded) Jan 14 18:35:25.883: INFO: pod-secrets-5a523e88-d1f1-46b1-b8c2-7b0072c2daca started at 2023-01-14 18:35:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:25.883: INFO: Container secret-volume-test ready: false, restart count 0 Jan 14 18:35:26.991: INFO: Latency metrics for node capz-67tgp2-mp-0000000 Jan 14 18:35:26.991: INFO: Logging node info for node capz-67tgp2-mp-0000001 Jan 14 18:35:27.103: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-mp-0000001 a57d1a46-19d4-4265-8229-3bb32b89963d 19871 0 2023-01-14 18:18:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:northeurope failure-domain.beta.kubernetes.io/zone:1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-mp-0000001 kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:1] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/owner-kind:MachinePool cluster.x-k8s.io/owner-name:capz-67tgp2-mp-0 kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.14.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-14 18:18:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-01-14 
18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2023-01-14 18:20:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 18:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.2.0/24\"":{}}}} } {manager Update v1 2023-01-14 18:21:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kubelet Update v1 2023-01-14 18:35:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.2.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-67tgp2-mp-0/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[192.168.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31025332224 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27922798956 0} {<nil>} 27922798956 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:20:41 +0000 UTC,LastTransitionTime:2023-01-14 18:20:41 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:20:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting 
ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-67tgp2-mp-0000001,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e38f17c71746485985c8ebe9f1d87480,SystemUUID:31667858-013a-6c49-bd37-41a0bfb4cd7c,BootID:a61dc5b1-073f-4988-b019-c5aa35ecae86,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec 
registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:35:27.103: INFO: Logging kubelet events for node capz-67tgp2-mp-0000001 Jan 14 18:35:27.209: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-mp-0000001 Jan 14 18:35:27.379: INFO: ss2-1 started at 2023-01-14 18:33:26 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:27.379: INFO: update-demo-nautilus-gtnf9 started at 2023-01-14 18:32:15 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container update-demo ready: true, restart count 0 Jan 14 18:35:27.379: INFO: pod-ready started at 2023-01-14 18:34:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container pod-readiness-gate ready: true, restart count 0 Jan 14 18:35:27.379: INFO: cloud-node-manager-c24hp started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:35:27.379: INFO: test-deployment-7df74c55ff-84hdq started at 2023-01-14 18:29:43 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:35:27.379: INFO: ss2-1 started at 2023-01-14 18:33:52 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:27.379: INFO: sample-apiserver-deployment-55bd96fd47-ff7kc started at 2023-01-14 18:31:43 +0000 UTC (0+2 container statuses recorded) Jan 14 18:35:27.379: INFO: Container etcd ready: true, restart count 0 Jan 14 18:35:27.379: INFO: Container sample-apiserver ready: false, restart count 0 Jan 14 18:35:27.379: INFO: ss-0 started at 
2023-01-14 18:35:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:27.379: INFO: busybox-81487092-f501-4426-acf5-c16c8471c3c4 started at 2023-01-14 18:34:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container busybox ready: false, restart count 0 Jan 14 18:35:27.379: INFO: test-rolling-update-controller-lh8rd started at 2023-01-14 18:31:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container httpd ready: true, restart count 0 Jan 14 18:35:27.379: INFO: ss-0 started at 2023-01-14 18:34:43 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:27.379: INFO: busybox-user-65534-e1188811-c39c-4714-8d9f-b3aad5e7e12b started at 2023-01-14 18:35:11 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container busybox-user-65534-e1188811-c39c-4714-8d9f-b3aad5e7e12b ready: false, restart count 0 Jan 14 18:35:27.379: INFO: downwardapi-volume-b58576c1-737b-42c9-aeb6-1d8e6a721d70 started at 2023-01-14 18:35:16 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.379: INFO: Container client-container ready: true, restart count 0 Jan 14 18:35:27.379: INFO: var-expansion-ad6ba4d8-d241-42f0-b086-ce254fed1d9a started at <nil> (0+0 container statuses recorded) Jan 14 18:35:27.380: INFO: pod-configmaps-36d07591-4990-4769-bfcb-b3813928fe8c started at 2023-01-14 18:35:15 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container env-test ready: false, restart count 0 Jan 14 18:35:27.380: INFO: test-ss-1 started at 2023-01-14 18:31:37 +0000 UTC (0+2 container statuses recorded) Jan 14 18:35:27.380: INFO: Container test-ss ready: true, restart count 0 Jan 14 18:35:27.380: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:27.380: INFO: kube-proxy-xd8xz started at 2023-01-14 18:19:07 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:35:27.380: INFO: pod2 started at 2023-01-14 18:35:17 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container agnhost-container ready: false, restart count 0 Jan 14 18:35:27.380: INFO: calico-node-lzp55 started at 2023-01-14 18:19:07 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:35:27.380: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:35:27.380: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:35:27.380: INFO: image-pull-testdb5f66f7-9de7-465c-888d-fcd0f2ef78f0 started at 2023-01-14 18:34:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container image-pull-test ready: false, restart count 0 Jan 14 18:35:27.380: INFO: test-rs-46njb started at 2023-01-14 18:31:59 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container httpd ready: true, restart count 0 Jan 14 18:35:27.380: INFO: test-deployment-7b7876f9d6-cjtpl started at 2023-01-14 18:33:10 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:35:27.380: INFO: ss2-2 started at 2023-01-14 18:34:53 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container webserver ready: false, restart count 0 Jan 14 18:35:27.380: INFO: sample-webhook-deployment-865554f4d9-9s6vn 
started at <nil> (0+0 container statuses recorded) Jan 14 18:35:27.380: INFO: sample-webhook-deployment-865554f4d9-xz65d started at 2023-01-14 18:35:13 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container sample-webhook ready: false, restart count 0 Jan 14 18:35:27.380: INFO: pod-init-e3f25dbe-5e64-4732-8132-bc1e8e27a112 started at 2023-01-14 18:35:14 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Init container init1 ready: true, restart count 0 Jan 14 18:35:27.380: INFO: Init container init2 ready: false, restart count 0 Jan 14 18:35:27.380: INFO: Container run1 ready: false, restart count 0 Jan 14 18:35:27.380: INFO: update-demo-nautilus-9757j started at 2023-01-14 18:30:58 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container update-demo ready: false, restart count 0 Jan 14 18:35:27.380: INFO: externalname-service-2nvd6 started at 2023-01-14 18:34:36 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:27.380: INFO: Container externalname-service ready: true, restart count 0 Jan 14 18:35:28.034: INFO: Latency metrics for node capz-67tgp2-mp-0000001 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/14/23 18:35:28.034 (4.563s) < Exit [DeferCleanup (Each)] [sig-apps] Deployment - dump namespaces | framework.go:206 @ 01/14/23 18:35:28.034 (4.563s) > Enter [DeferCleanup (Each)] [sig-apps] Deployment - tear down framework | framework.go:203 @ 01/14/23 18:35:28.034 STEP: Destroying namespace "deployment-6429" for this suite. - test/e2e/framework/framework.go:347 @ 01/14/23 18:35:28.034 < Exit [DeferCleanup (Each)] [sig-apps] Deployment - tear down framework | framework.go:203 @ 01/14/23 18:35:28.145 (111ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/14/23 18:35:28.145 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/14/23 18:35:28.145 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sReplicaSet\sshould\svalidate\sReplicaset\sStatus\sendpoints\s\[Conformance\]$'
[FAILED] failed to locate replicaset test-rs in namespace replicaset-4894: watch closed before UntilWithoutRetry timeout In [It] at: test/e2e/apps/replica_set.go:697 @ 01/14/23 18:35:20.7 from ginkgo_report.xml
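The "watch closed before UntilWithoutRetry timeout" text is client-go's ErrWatchClosed sentinel: UntilWithoutRetry consumes a single watch stream and gives up as soon as that stream is closed, without re-listing or re-watching. Below is a minimal, illustrative sketch of that pattern (not the e2e test's actual code); the client setup, namespace, ReplicaSet name, field selector, and 3-minute timeout are placeholders chosen for the example.

```go
// Illustrative sketch only: how a single-stream wait built on client-go's
// UntilWithoutRetry surfaces "watch closed before UntilWithoutRetry timeout"
// when the watch stream is closed before the expected event arrives.
package watchsketch

import (
	"context"
	"errors"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	watchtools "k8s.io/client-go/tools/watch"
)

// waitForReplicaSetEvent waits for an ADDED or MODIFIED event on one named
// ReplicaSet. Namespace, name, and the timeout are placeholders.
func waitForReplicaSetEvent(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	w, err := cs.AppsV1().ReplicaSets(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return err
	}

	ctx, cancel := context.WithTimeout(ctx, 3*time.Minute)
	defer cancel()

	// UntilWithoutRetry never re-establishes the watch: if the one stream it
	// was handed closes first, it returns watchtools.ErrWatchClosed, which is
	// the message reported in the failure above.
	_, err = watchtools.UntilWithoutRetry(ctx, w, func(ev watch.Event) (bool, error) {
		return ev.Type == watch.Added || ev.Type == watch.Modified, nil
	})
	if errors.Is(err, watchtools.ErrWatchClosed) {
		return fmt.Errorf("replicaset %s/%s: %w", ns, name, err)
	}
	return err
}
```

Because the helper does not retry, any dropped or prematurely closed watch connection produces this error even if the expected event would eventually have been delivered.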
> Enter [BeforeEach] [sig-apps] ReplicaSet - set up framework | framework.go:188 @ 01/14/23 18:31:59.018 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/14/23 18:31:59.019 Jan 14 18:31:59.019: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig STEP: Building a namespace api object, basename replicaset - test/e2e/framework/framework.go:247 @ 01/14/23 18:31:59.019 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/14/23 18:31:59.331 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/14/23 18:31:59.533 < Exit [BeforeEach] [sig-apps] ReplicaSet - set up framework | framework.go:188 @ 01/14/23 18:31:59.735 (716ms) > Enter [BeforeEach] [sig-apps] ReplicaSet - test/e2e/framework/metrics/init/init.go:33 @ 01/14/23 18:31:59.735 < Exit [BeforeEach] [sig-apps] ReplicaSet - test/e2e/framework/metrics/init/init.go:33 @ 01/14/23 18:31:59.735 (0s) > Enter [It] should validate Replicaset Status endpoints [Conformance] - test/e2e/apps/replica_set.go:176 @ 01/14/23 18:31:59.735 STEP: Create a Replicaset - test/e2e/apps/replica_set.go:629 @ 01/14/23 18:31:59.838 STEP: Verify that the required pods have come up. - test/e2e/apps/replica_set.go:634 @ 01/14/23 18:31:59.946 Jan 14 18:32:00.050: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running - test/e2e/framework/pod/resource.go:227 @ 01/14/23 18:32:00.05 Jan 14 18:32:00.050: INFO: Waiting up to 5m0s for pod "test-rs-46njb" in namespace "replicaset-4894" to be "running" Jan 14 18:32:00.152: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 102.453968ms Jan 14 18:32:02.257: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206898516s Jan 14 18:32:04.263: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212999397s Jan 14 18:32:06.258: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208493253s Jan 14 18:32:08.368: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.317760408s Jan 14 18:32:10.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.204947056s Jan 14 18:32:12.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.205635054s Jan 14 18:32:14.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.205524654s Jan 14 18:32:16.254: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.204463264s Jan 14 18:32:18.269: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 18.219347834s Jan 14 18:32:20.257: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.20717081s Jan 14 18:32:22.257: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.207209075s Jan 14 18:32:24.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 24.205696782s Jan 14 18:32:26.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.205301748s Jan 14 18:32:28.258: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 28.20794967s Jan 14 18:32:30.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.205136336s Jan 14 18:32:32.268: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 32.218500269s Jan 14 18:32:34.261: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 34.211243473s Jan 14 18:32:36.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 36.205598963s Jan 14 18:32:38.254: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 38.204740672s Jan 14 18:32:40.254: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 40.204299867s Jan 14 18:32:42.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 42.204979165s Jan 14 18:32:44.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 44.205153471s Jan 14 18:32:46.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 46.205132432s Jan 14 18:32:48.254: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 48.204419149s Jan 14 18:32:50.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 50.204971359s Jan 14 18:32:52.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 52.205620152s Jan 14 18:32:54.256: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 54.206314043s Jan 14 18:32:56.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 56.20531297s Jan 14 18:32:58.259: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 58.209108856s Jan 14 18:33:00.263: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.21303587s Jan 14 18:33:02.256: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.206439733s Jan 14 18:33:04.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.205576379s Jan 14 18:33:06.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.204884986s Jan 14 18:33:08.256: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.205985554s Jan 14 18:33:10.266: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.215839851s Jan 14 18:33:12.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.205737514s Jan 14 18:33:14.254: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.204558249s Jan 14 18:33:16.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.205654811s Jan 14 18:33:18.262: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.212642894s Jan 14 18:33:20.254: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.204397222s Jan 14 18:33:22.254: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.204324076s Jan 14 18:33:24.256: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.205802561s Jan 14 18:33:26.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.204804699s Jan 14 18:33:28.258: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.207771063s Jan 14 18:33:30.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m30.20509442s Jan 14 18:33:32.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.204806532s Jan 14 18:33:34.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.205159847s Jan 14 18:33:36.263: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.21374215s Jan 14 18:33:38.257: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.206890009s Jan 14 18:33:40.262: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.212012842s Jan 14 18:33:42.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.205183251s Jan 14 18:33:44.258: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.20842548s Jan 14 18:33:46.261: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.210778858s Jan 14 18:33:48.280: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.230218919s Jan 14 18:33:50.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.20540792s Jan 14 18:33:52.257: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.207535386s Jan 14 18:33:54.259: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.209417695s Jan 14 18:33:56.256: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.206166881s Jan 14 18:33:58.258: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.208330455s Jan 14 18:34:00.254: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.204724396s Jan 14 18:34:02.256: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.206237885s Jan 14 18:34:04.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.204945099s Jan 14 18:34:06.264: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.213970901s Jan 14 18:34:08.254: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.204713415s Jan 14 18:34:10.264: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.21445277s Jan 14 18:34:12.254: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.204530545s Jan 14 18:34:14.264: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.213891315s Jan 14 18:34:16.259: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.2095037s Jan 14 18:34:18.277: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.226965746s Jan 14 18:34:20.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.205177974s Jan 14 18:34:22.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.205350967s Jan 14 18:34:24.256: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.20637934s Jan 14 18:34:26.259: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.208904846s Jan 14 18:34:28.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.205141373s Jan 14 18:34:30.262: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m30.211862398s Jan 14 18:34:32.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.205233021s Jan 14 18:34:34.275: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.225485845s Jan 14 18:34:36.266: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.216167695s Jan 14 18:34:38.257: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.207042445s Jan 14 18:34:40.259: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.20942118s Jan 14 18:34:42.259: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.209428396s Jan 14 18:34:44.261: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.21127193s Jan 14 18:34:46.254: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.204543639s Jan 14 18:34:48.258: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.2077949s Jan 14 18:34:50.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.20563898s Jan 14 18:34:52.256: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.206465509s Jan 14 18:34:54.260: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.210198298s Jan 14 18:34:56.257: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.207721726s Jan 14 18:34:58.259: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.208846185s Jan 14 18:35:00.263: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.213132741s Jan 14 18:35:02.254: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.204041227s Jan 14 18:35:04.259: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.209665884s Jan 14 18:35:06.258: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.208204646s Jan 14 18:35:08.256: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.205952141s Jan 14 18:35:10.265: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.215021899s Jan 14 18:35:12.258: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.208531951s Jan 14 18:35:14.272: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.221802963s Jan 14 18:35:16.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.204858762s Jan 14 18:35:18.261: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.211692007s Jan 14 18:35:20.278: INFO: Pod "test-rs-46njb": Phase="Running", Reason="", readiness=true. 
Elapsed: 3m20.227779299s
Jan 14 18:35:20.278: INFO: Pod "test-rs-46njb" satisfied condition "running"
STEP: Getting /status - test/e2e/apps/replica_set.go:638 @ 01/14/23 18:35:20.278
Jan 14 18:35:20.383: INFO: Replicaset test-rs has Conditions: []
STEP: updating the Replicaset Status - test/e2e/apps/replica_set.go:650 @ 01/14/23 18:35:20.383
Jan 14 18:35:20.597: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the ReplicaSet status to be updated - test/e2e/apps/replica_set.go:670 @ 01/14/23 18:35:20.597
Jan 14 18:35:20.700: INFO: Observed &Status event: ERROR
Jan 14 18:35:20.700: INFO: Unexpected error: failed to locate replicaset test-rs in namespace replicaset-4894: <*errors.errorString | 0xc0006c1480>: { s: "watch closed before UntilWithoutRetry timeout", }
[FAILED] failed to locate replicaset test-rs in namespace replicaset-4894: watch closed before UntilWithoutRetry timeout
In [It] at: test/e2e/apps/replica_set.go:697 @ 01/14/23 18:35:20.7
< Exit [It] should validate Replicaset Status endpoints [Conformance] - test/e2e/apps/replica_set.go:176 @ 01/14/23 18:35:20.701 (3m20.965s)
> Enter [AfterEach] [sig-apps] ReplicaSet - test/e2e/framework/node/init/init.go:33 @ 01/14/23 18:35:20.701
Jan 14 18:35:20.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-apps] ReplicaSet - test/e2e/framework/node/init/init.go:33 @ 01/14/23 18:35:20.85 (150ms)
> Enter [DeferCleanup (Each)] [sig-apps] ReplicaSet - test/e2e/framework/metrics/init/init.go:35 @ 01/14/23 18:35:20.85
< Exit [DeferCleanup (Each)] [sig-apps] ReplicaSet - test/e2e/framework/metrics/init/init.go:35 @ 01/14/23 18:35:20.85 (0s)
> Enter [DeferCleanup (Each)] [sig-apps] ReplicaSet - dump namespaces | framework.go:206 @ 01/14/23 18:35:20.85
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/14/23 18:35:20.85
STEP: Collecting events from namespace "replicaset-4894". - test/e2e/framework/debug/dump.go:42 @ 01/14/23 18:35:20.85
STEP: Found 6 events.
- test/e2e/framework/debug/dump.go:46 @ 01/14/23 18:35:20.958 Jan 14 18:35:20.958: INFO: At 2023-01-14 18:31:59 +0000 UTC - event for test-rs: {replicaset-controller } SuccessfulCreate: Created pod: test-rs-46njb Jan 14 18:35:20.958: INFO: At 2023-01-14 18:31:59 +0000 UTC - event for test-rs-46njb: {default-scheduler } Scheduled: Successfully assigned replicaset-4894/test-rs-46njb to capz-67tgp2-mp-0000001 Jan 14 18:35:20.958: INFO: At 2023-01-14 18:32:09 +0000 UTC - event for test-rs-46njb: {kubelet capz-67tgp2-mp-0000001} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" Jan 14 18:35:20.958: INFO: At 2023-01-14 18:35:08 +0000 UTC - event for test-rs-46njb: {kubelet capz-67tgp2-mp-0000001} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 385.300462ms (2m59.182466633s including waiting) Jan 14 18:35:20.958: INFO: At 2023-01-14 18:35:09 +0000 UTC - event for test-rs-46njb: {kubelet capz-67tgp2-mp-0000001} Created: Created container httpd Jan 14 18:35:20.958: INFO: At 2023-01-14 18:35:10 +0000 UTC - event for test-rs-46njb: {kubelet capz-67tgp2-mp-0000001} Started: Started container httpd Jan 14 18:35:21.070: INFO: POD NODE PHASE GRACE CONDITIONS Jan 14 18:35:21.070: INFO: test-rs-46njb capz-67tgp2-mp-0000001 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:31:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:35:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:35:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:31:59 +0000 UTC }] Jan 14 18:35:21.070: INFO: Jan 14 18:35:21.351: INFO: Logging node info for node capz-67tgp2-control-plane-2chph Jan 14 18:35:21.472: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-control-plane-2chph 28170de3-aa87-4a67-a5ad-65493aeb11b3 12074 0 2023-01-14 18:16:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-control-plane-2chph kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:northeurope-2] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-67tgp2-control-plane-tj79f cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-67tgp2-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.35.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-14 18:16:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubeadm Update v1 2023-01-14 18:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-01-14 18:17:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } 
{Go-http-client Update v1 2023-01-14 18:17:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-node-manager Update v1 2023-01-14 18:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {manager Update v1 2023-01-14 18:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-14 18:31:33 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.0.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachines/capz-67tgp2-control-plane-2chph,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[192.168.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:17:47 +0000 UTC,LastTransitionTime:2023-01-14 18:17:47 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:17:37 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.0.0.4,},NodeAddress{Type:Hostname,Address:capz-67tgp2-control-plane-2chph,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aa56c5629889429baa21826756529ecb,SystemUUID:744c1c53-9da3-134c-b7da-86c573f76ec3,BootID:b6ed8583-6ec6-40d3-b9e2-4bfd39a59694,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-controller-manager@sha256:a52d9377e1464d9e2d827e6555d7edf9082b5d85b60676d2fd74b87e202bad0c capzci.azurecr.io/azure-cloud-controller-manager:63c1cd3],SizeBytes:16980267,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf 
capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:35:21.473: INFO: Logging kubelet events for node capz-67tgp2-control-plane-2chph Jan 14 18:35:21.589: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-control-plane-2chph Jan 14 18:35:21.797: INFO: kube-scheduler-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:45 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Container kube-scheduler ready: true, restart count 0 Jan 14 18:35:21.797: INFO: kube-proxy-j74l7 started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:35:21.797: INFO: calico-node-g5dqz started at 2023-01-14 18:17:11 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:35:21.797: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:35:21.797: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:35:21.797: INFO: cloud-node-manager-5qlnt started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:35:21.797: INFO: cloud-controller-manager-64479fbc67-xdds2 started at 2023-01-14 18:20:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Container cloud-controller-manager ready: true, restart count 0 Jan 14 18:35:21.797: INFO: etcd-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Container etcd ready: true, restart count 0 Jan 14 18:35:21.797: INFO: kube-apiserver-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Container kube-apiserver ready: true, restart count 0 Jan 14 18:35:21.797: INFO: kube-controller-manager-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Container kube-controller-manager ready: true, restart count 0 Jan 14 18:35:22.369: INFO: Latency metrics for node capz-67tgp2-control-plane-2chph Jan 14 18:35:22.369: INFO: Logging node info for node capz-67tgp2-mp-0000000 Jan 14 18:35:22.474: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-mp-0000000 d6bf69fc-90f8-43c8-9623-356f58ea157f 16641 0 2023-01-14 18:19:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:northeurope failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-mp-0000000 
kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/owner-kind:MachinePool cluster.x-k8s.io/owner-name:capz-67tgp2-mp-0 kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.243.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-14 18:19:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 18:20:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.1.0/24\"":{}}}} } {manager Update v1 2023-01-14 18:21:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kubelet Update v1 2023-01-14 18:33:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.1.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-67tgp2-mp-0/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[192.168.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31025332224 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27922798956 0} {<nil>} 27922798956 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:20:12 +0000 UTC,LastTransitionTime:2023-01-14 18:20:12 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-67tgp2-mp-0000000,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:95d9ab6ead5141e2b46b1d18fec95432,SystemUUID:3fc8a171-f25a-2049-95d3-3c4be76d51a7,BootID:b9ac1a12-eff5-45ad-b970-9df972ef339e,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 
registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:35:22.474: INFO: Logging kubelet events for node capz-67tgp2-mp-0000000 Jan 14 18:35:22.579: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-mp-0000000 Jan 14 18:35:22.758: INFO: dns-test-eeb40b41-fc0f-431a-8cac-0735a1f4243b started at 2023-01-14 18:33:46 +0000 UTC (0+3 container statuses recorded) Jan 14 18:35:22.758: INFO: Container jessie-querier ready: false, restart count 0 Jan 14 18:35:22.758: INFO: Container querier ready: false, 
restart count 0 Jan 14 18:35:22.758: INFO: Container webserver ready: false, restart count 0 Jan 14 18:35:22.758: INFO: pod1 started at 2023-01-14 18:34:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container agnhost-container ready: true, restart count 0 Jan 14 18:35:22.758: INFO: execpodg9czm started at 2023-01-14 18:35:00 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container agnhost-container ready: true, restart count 0 Jan 14 18:35:22.758: INFO: update-demo-nautilus-mcn6g started at 2023-01-14 18:32:15 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container update-demo ready: true, restart count 0 Jan 14 18:35:22.758: INFO: test-deployment-7b7876f9d6-zqb4p started at 2023-01-14 18:30:28 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:35:22.758: INFO: alpine-nnp-true-a4de807c-d017-4cbf-80dd-9efa36816371 started at 2023-01-14 18:33:49 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container alpine-nnp-true-a4de807c-d017-4cbf-80dd-9efa36816371 ready: false, restart count 0 Jan 14 18:35:22.758: INFO: execpodmhfjd started at 2023-01-14 18:34:57 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container agnhost-container ready: true, restart count 0 Jan 14 18:35:22.758: INFO: externalname-service-pq2wx started at 2023-01-14 18:34:36 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container externalname-service ready: true, restart count 0 Jan 14 18:35:22.758: INFO: coredns-56f4c55bf9-4pfjc started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container coredns ready: true, restart count 0 Jan 14 18:35:22.758: INFO: metrics-server-795d765ff8-rskk8 started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container metrics-server ready: true, restart count 0 Jan 14 18:35:22.758: INFO: ss2-0 started at 2023-01-14 18:31:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:22.758: INFO: ss2-0 started at 2023-01-14 18:34:37 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:22.758: INFO: cloud-node-manager-l846f started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:35:22.758: INFO: ss2-2 started at <nil> (0+0 container statuses recorded) Jan 14 18:35:22.758: INFO: test-ss-0 started at 2023-01-14 18:28:36 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:22.758: INFO: calico-node-t5npc started at 2023-01-14 18:19:05 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:35:22.758: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:35:22.758: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:35:22.758: INFO: calico-kube-controllers-657b584867-tn8lq started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 14 18:35:22.758: INFO: pod-secrets-5a523e88-d1f1-46b1-b8c2-7b0072c2daca started at 
<nil> (0+0 container statuses recorded) Jan 14 18:35:22.758: INFO: coredns-56f4c55bf9-zp98j started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container coredns ready: true, restart count 0 Jan 14 18:35:22.758: INFO: tester started at 2023-01-14 18:35:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container tester ready: true, restart count 0 Jan 14 18:35:22.758: INFO: ss-1 started at 2023-01-14 18:35:14 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container webserver ready: false, restart count 0 Jan 14 18:35:22.758: INFO: sample-webhook-deployment-865554f4d9-bb228 started at 2023-01-14 18:35:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container sample-webhook ready: false, restart count 0 Jan 14 18:35:22.758: INFO: sysctl-f358e086-4a11-4d39-95fe-d645fb791239 started at 2023-01-14 18:35:15 +0000 UTC (0+0 container statuses recorded) Jan 14 18:35:22.758: INFO: kube-proxy-8jftq started at 2023-01-14 18:19:05 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:35:22.758: INFO: test-rolling-update-deployment-7549d9f46d-pklnz started at <nil> (0+0 container statuses recorded) Jan 14 18:35:22.758: INFO: downward-api-1073a5a4-0d5f-4af3-9e34-20a20f87b5ea started at 2023-01-14 18:35:19 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container dapi-container ready: false, restart count 0 Jan 14 18:35:22.758: INFO: pod-qos-class-bddd171a-e154-4523-9abb-837b2095dfbb started at 2023-01-14 18:33:27 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container agnhost ready: false, restart count 0 Jan 14 18:35:23.838: INFO: Latency metrics for node capz-67tgp2-mp-0000000 Jan 14 18:35:23.838: INFO: Logging node info for node capz-67tgp2-mp-0000001 Jan 14 18:35:23.943: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-mp-0000001 a57d1a46-19d4-4265-8229-3bb32b89963d 19871 0 2023-01-14 18:18:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:northeurope failure-domain.beta.kubernetes.io/zone:1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-mp-0000001 kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:1] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/owner-kind:MachinePool cluster.x-k8s.io/owner-name:capz-67tgp2-mp-0 kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.14.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-14 18:18:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } 
{kubeadm Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2023-01-14 18:20:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 18:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.2.0/24\"":{}}}} } {manager Update v1 2023-01-14 18:21:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kubelet Update v1 2023-01-14 18:35:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.2.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-67tgp2-mp-0/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[192.168.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31025332224 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27922798956 0} {<nil>} 27922798956 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:20:41 +0000 UTC,LastTransitionTime:2023-01-14 18:20:41 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:20:32 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-67tgp2-mp-0000001,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e38f17c71746485985c8ebe9f1d87480,SystemUUID:31667858-013a-6c49-bd37-41a0bfb4cd7c,BootID:a61dc5b1-073f-4988-b019-c5aa35ecae86,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec 
registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:35:23.943: INFO: Logging kubelet events for node capz-67tgp2-mp-0000001 Jan 14 18:35:24.049: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-mp-0000001 Jan 14 18:35:24.215: INFO: pod-init-e3f25dbe-5e64-4732-8132-bc1e8e27a112 started at 2023-01-14 18:35:14 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Init container init1 ready: false, restart count 0 Jan 14 18:35:24.215: INFO: Init container init2 ready: false, restart count 0 Jan 14 18:35:24.215: INFO: Container run1 ready: false, restart count 0 Jan 14 18:35:24.215: INFO: update-demo-nautilus-9757j started at 2023-01-14 18:30:58 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container update-demo ready: true, restart count 0 Jan 14 18:35:24.215: INFO: externalname-service-2nvd6 started at 2023-01-14 18:34:36 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container externalname-service ready: true, restart count 0 Jan 14 18:35:24.215: INFO: ss2-1 started at 2023-01-14 18:33:26 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:24.215: INFO: update-demo-nautilus-gtnf9 started at 2023-01-14 18:32:15 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container update-demo ready: true, restart count 0 Jan 14 18:35:24.215: INFO: pod-ready started at 2023-01-14 18:34:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container pod-readiness-gate ready: true, restart count 0 Jan 14 18:35:24.215: INFO: image-pull-test52485987-1264-447b-b3c6-bbe4761b3eb2 started at 2023-01-14 18:33:48 +0000 UTC (0+1 container statuses recorded) Jan 
14 18:35:24.215: INFO: Container image-pull-test ready: false, restart count 0 Jan 14 18:35:24.215: INFO: cloud-node-manager-c24hp started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:35:24.215: INFO: test-deployment-7df74c55ff-84hdq started at 2023-01-14 18:29:43 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:35:24.215: INFO: ss2-1 started at 2023-01-14 18:33:52 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:24.215: INFO: sample-apiserver-deployment-55bd96fd47-ff7kc started at 2023-01-14 18:31:43 +0000 UTC (0+2 container statuses recorded) Jan 14 18:35:24.215: INFO: Container etcd ready: true, restart count 0 Jan 14 18:35:24.215: INFO: Container sample-apiserver ready: false, restart count 0 Jan 14 18:35:24.215: INFO: ss-0 started at 2023-01-14 18:35:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:24.215: INFO: busybox-81487092-f501-4426-acf5-c16c8471c3c4 started at 2023-01-14 18:34:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container busybox ready: false, restart count 0 Jan 14 18:35:24.215: INFO: test-rolling-update-controller-lh8rd started at 2023-01-14 18:31:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container httpd ready: true, restart count 0 Jan 14 18:35:24.215: INFO: ss-0 started at 2023-01-14 18:34:43 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:24.215: INFO: busybox-user-65534-e1188811-c39c-4714-8d9f-b3aad5e7e12b started at 2023-01-14 18:35:11 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container busybox-user-65534-e1188811-c39c-4714-8d9f-b3aad5e7e12b ready: false, restart count 0 Jan 14 18:35:24.215: INFO: downwardapi-volume-b58576c1-737b-42c9-aeb6-1d8e6a721d70 started at 2023-01-14 18:35:16 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container client-container ready: true, restart count 0 Jan 14 18:35:24.215: INFO: pod-configmaps-36d07591-4990-4769-bfcb-b3813928fe8c started at 2023-01-14 18:35:15 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container env-test ready: false, restart count 0 Jan 14 18:35:24.215: INFO: test-ss-1 started at 2023-01-14 18:31:37 +0000 UTC (0+2 container statuses recorded) Jan 14 18:35:24.215: INFO: Container test-ss ready: true, restart count 0 Jan 14 18:35:24.215: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:24.215: INFO: kube-proxy-xd8xz started at 2023-01-14 18:19:07 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:35:24.215: INFO: pod2 started at <nil> (0+0 container statuses recorded) Jan 14 18:35:24.215: INFO: calico-node-lzp55 started at 2023-01-14 18:19:07 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:35:24.215: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:35:24.215: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:35:24.215: INFO: image-pull-testdb5f66f7-9de7-465c-888d-fcd0f2ef78f0 started at 2023-01-14 
18:34:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container image-pull-test ready: false, restart count 0 Jan 14 18:35:24.215: INFO: test-rs-46njb started at 2023-01-14 18:31:59 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container httpd ready: true, restart count 0 Jan 14 18:35:24.215: INFO: test-deployment-7b7876f9d6-cjtpl started at 2023-01-14 18:33:10 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:35:24.215: INFO: ss2-2 started at 2023-01-14 18:34:53 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container webserver ready: false, restart count 0 Jan 14 18:35:24.215: INFO: sample-webhook-deployment-865554f4d9-xz65d started at 2023-01-14 18:35:13 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container sample-webhook ready: false, restart count 0 Jan 14 18:35:25.166: INFO: Latency metrics for node capz-67tgp2-mp-0000001 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/14/23 18:35:25.166 (4.315s) < Exit [DeferCleanup (Each)] [sig-apps] ReplicaSet - dump namespaces | framework.go:206 @ 01/14/23 18:35:25.166 (4.315s) > Enter [DeferCleanup (Each)] [sig-apps] ReplicaSet - tear down framework | framework.go:203 @ 01/14/23 18:35:25.166 STEP: Destroying namespace "replicaset-4894" for this suite. - test/e2e/framework/framework.go:347 @ 01/14/23 18:35:25.166 < Exit [DeferCleanup (Each)] [sig-apps] ReplicaSet - tear down framework | framework.go:203 @ 01/14/23 18:35:25.293 (127ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/14/23 18:35:25.293 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/14/23 18:35:25.293 (0s)
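The error string "watch closed before UntilWithoutRetry timeout" is the sentinel error that client-go's watch helper returns when the watch stream is closed by the server before the awaited condition is observed; the test then surfaces it as "failed to locate replicaset test-rs". The sketch below is a minimal, illustrative reconstruction of that watch pattern, not the conformance test's exact code; the function name, timeout, and condition logic are assumptions.

// Illustrative sketch only; not the e2e framework's actual helper.
package rswatch

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	watchtools "k8s.io/client-go/tools/watch"
)

// waitForReplicaSetCondition (hypothetical name) watches a ReplicaSet until a
// status condition of the given type appears. If the server closes the stream
// first, UntilWithoutRetry returns watchtools.ErrWatchClosed, whose message is
// "watch closed before UntilWithoutRetry timeout" -- the error in the log above.
func waitForReplicaSetCondition(cs kubernetes.Interface, ns, name, condType string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) // assumed timeout
	defer cancel()

	w, err := cs.AppsV1().ReplicaSets(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return err
	}

	_, err = watchtools.UntilWithoutRetry(ctx, w, func(event watch.Event) (bool, error) {
		rs, ok := event.Object.(*appsv1.ReplicaSet)
		if !ok {
			// e.g. a Status object delivered as an ERROR event, as observed in the log
			return false, fmt.Errorf("unexpected object %T", event.Object)
		}
		for _, c := range rs.Status.Conditions {
			if string(c.Type) == condType {
				return true, nil
			}
		}
		return false, nil
	})
	return err
}

With this pattern a single dropped watch connection surfaces as exactly the failure above, since UntilWithoutRetry does not re-establish the watch.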
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sReplicaSet\sshould\svalidate\sReplicaset\sStatus\sendpoints\s\[Conformance\]$'
[FAILED] failed to locate replicaset test-rs in namespace replicaset-4894: watch closed before UntilWithoutRetry timeout In [It] at: test/e2e/apps/replica_set.go:697 @ 01/14/23 18:35:20.7 from junit_01.xml
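The long run of Phase="Pending" entries earlier in the output comes from the framework waiting up to 5m0s for the pod to reach "running", polling roughly every two seconds. A rough equivalent of that wait loop is sketched below; the function name, interval, and timeout are assumed for illustration.

// Illustrative sketch only; names and intervals are assumptions.
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunning polls the pod's phase every two seconds for up to five
// minutes, printing one line per attempt much like the report's
// `Pod "test-rs-46njb": Phase="Pending" ... Elapsed: ...` entries.
func waitForPodRunning(cs kubernetes.Interface, ns, name string) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		return pod.Status.Phase == corev1.PodRunning, nil
	})
}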
Elapsed: 2m30.211862398s Jan 14 18:34:32.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.205233021s Jan 14 18:34:34.275: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.225485845s Jan 14 18:34:36.266: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.216167695s Jan 14 18:34:38.257: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.207042445s Jan 14 18:34:40.259: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.20942118s Jan 14 18:34:42.259: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.209428396s Jan 14 18:34:44.261: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.21127193s Jan 14 18:34:46.254: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.204543639s Jan 14 18:34:48.258: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.2077949s Jan 14 18:34:50.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.20563898s Jan 14 18:34:52.256: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.206465509s Jan 14 18:34:54.260: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.210198298s Jan 14 18:34:56.257: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.207721726s Jan 14 18:34:58.259: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.208846185s Jan 14 18:35:00.263: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.213132741s Jan 14 18:35:02.254: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.204041227s Jan 14 18:35:04.259: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.209665884s Jan 14 18:35:06.258: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.208204646s Jan 14 18:35:08.256: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.205952141s Jan 14 18:35:10.265: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.215021899s Jan 14 18:35:12.258: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.208531951s Jan 14 18:35:14.272: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.221802963s Jan 14 18:35:16.255: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.204858762s Jan 14 18:35:18.261: INFO: Pod "test-rs-46njb": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.211692007s Jan 14 18:35:20.278: INFO: Pod "test-rs-46njb": Phase="Running", Reason="", readiness=true. 
Elapsed: 3m20.227779299s Jan 14 18:35:20.278: INFO: Pod "test-rs-46njb" satisfied condition "running" STEP: Getting /status - test/e2e/apps/replica_set.go:638 @ 01/14/23 18:35:20.278 Jan 14 18:35:20.383: INFO: Replicaset test-rs has Conditions: [] STEP: updating the Replicaset Status - test/e2e/apps/replica_set.go:650 @ 01/14/23 18:35:20.383 Jan 14 18:35:20.597: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the ReplicaSet status to be updated - test/e2e/apps/replica_set.go:670 @ 01/14/23 18:35:20.597 Jan 14 18:35:20.700: INFO: Observed &Status event: ERROR Jan 14 18:35:20.700: INFO: Unexpected error: failed to locate replicaset test-rs in namespace replicaset-4894: <*errors.errorString | 0xc0006c1480>: { s: "watch closed before UntilWithoutRetry timeout", } [FAILED] failed to locate replicaset test-rs in namespace replicaset-4894: watch closed before UntilWithoutRetry timeout In [It] at: test/e2e/apps/replica_set.go:697 @ 01/14/23 18:35:20.7 < Exit [It] should validate Replicaset Status endpoints [Conformance] - test/e2e/apps/replica_set.go:176 @ 01/14/23 18:35:20.701 (3m20.965s) > Enter [AfterEach] [sig-apps] ReplicaSet - test/e2e/framework/node/init/init.go:33 @ 01/14/23 18:35:20.701 Jan 14 18:35:20.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-apps] ReplicaSet - test/e2e/framework/node/init/init.go:33 @ 01/14/23 18:35:20.85 (150ms) > Enter [DeferCleanup (Each)] [sig-apps] ReplicaSet - test/e2e/framework/metrics/init/init.go:35 @ 01/14/23 18:35:20.85 < Exit [DeferCleanup (Each)] [sig-apps] ReplicaSet - test/e2e/framework/metrics/init/init.go:35 @ 01/14/23 18:35:20.85 (0s) > Enter [DeferCleanup (Each)] [sig-apps] ReplicaSet - dump namespaces | framework.go:206 @ 01/14/23 18:35:20.85 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/14/23 18:35:20.85 STEP: Collecting events from namespace "replicaset-4894". - test/e2e/framework/debug/dump.go:42 @ 01/14/23 18:35:20.85 STEP: Found 6 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/14/23 18:35:20.958 Jan 14 18:35:20.958: INFO: At 2023-01-14 18:31:59 +0000 UTC - event for test-rs: {replicaset-controller } SuccessfulCreate: Created pod: test-rs-46njb Jan 14 18:35:20.958: INFO: At 2023-01-14 18:31:59 +0000 UTC - event for test-rs-46njb: {default-scheduler } Scheduled: Successfully assigned replicaset-4894/test-rs-46njb to capz-67tgp2-mp-0000001 Jan 14 18:35:20.958: INFO: At 2023-01-14 18:32:09 +0000 UTC - event for test-rs-46njb: {kubelet capz-67tgp2-mp-0000001} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" Jan 14 18:35:20.958: INFO: At 2023-01-14 18:35:08 +0000 UTC - event for test-rs-46njb: {kubelet capz-67tgp2-mp-0000001} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 385.300462ms (2m59.182466633s including waiting) Jan 14 18:35:20.958: INFO: At 2023-01-14 18:35:09 +0000 UTC - event for test-rs-46njb: {kubelet capz-67tgp2-mp-0000001} Created: Created container httpd Jan 14 18:35:20.958: INFO: At 2023-01-14 18:35:10 +0000 UTC - event for test-rs-46njb: {kubelet capz-67tgp2-mp-0000001} Started: Started container httpd Jan 14 18:35:21.070: INFO: POD NODE PHASE GRACE CONDITIONS Jan 14 18:35:21.070: INFO: test-rs-46njb capz-67tgp2-mp-0000001 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:31:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:35:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:35:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:31:59 +0000 UTC }] Jan 14 18:35:21.070: INFO: Jan 14 18:35:21.351: INFO: Logging node info for node capz-67tgp2-control-plane-2chph Jan 14 18:35:21.472: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-control-plane-2chph 28170de3-aa87-4a67-a5ad-65493aeb11b3 12074 0 2023-01-14 18:16:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-control-plane-2chph kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:northeurope-2] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-67tgp2-control-plane-tj79f cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-67tgp2-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.35.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-14 18:16:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubeadm Update v1 2023-01-14 18:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-01-14 18:17:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } 
{Go-http-client Update v1 2023-01-14 18:17:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-node-manager Update v1 2023-01-14 18:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {manager Update v1 2023-01-14 18:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-14 18:31:33 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.0.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachines/capz-67tgp2-control-plane-2chph,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[192.168.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:17:47 +0000 UTC,LastTransitionTime:2023-01-14 18:17:47 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:31:33 +0000 UTC,LastTransitionTime:2023-01-14 18:17:37 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.0.0.4,},NodeAddress{Type:Hostname,Address:capz-67tgp2-control-plane-2chph,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aa56c5629889429baa21826756529ecb,SystemUUID:744c1c53-9da3-134c-b7da-86c573f76ec3,BootID:b6ed8583-6ec6-40d3-b9e2-4bfd39a59694,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-controller-manager@sha256:a52d9377e1464d9e2d827e6555d7edf9082b5d85b60676d2fd74b87e202bad0c capzci.azurecr.io/azure-cloud-controller-manager:63c1cd3],SizeBytes:16980267,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf 
capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:35:21.473: INFO: Logging kubelet events for node capz-67tgp2-control-plane-2chph Jan 14 18:35:21.589: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-control-plane-2chph Jan 14 18:35:21.797: INFO: kube-scheduler-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:45 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Container kube-scheduler ready: true, restart count 0 Jan 14 18:35:21.797: INFO: kube-proxy-j74l7 started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:35:21.797: INFO: calico-node-g5dqz started at 2023-01-14 18:17:11 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:35:21.797: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:35:21.797: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:35:21.797: INFO: cloud-node-manager-5qlnt started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:35:21.797: INFO: cloud-controller-manager-64479fbc67-xdds2 started at 2023-01-14 18:20:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Container cloud-controller-manager ready: true, restart count 0 Jan 14 18:35:21.797: INFO: etcd-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Container etcd ready: true, restart count 0 Jan 14 18:35:21.797: INFO: kube-apiserver-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Container kube-apiserver ready: true, restart count 0 Jan 14 18:35:21.797: INFO: kube-controller-manager-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:21.797: INFO: Container kube-controller-manager ready: true, restart count 0 Jan 14 18:35:22.369: INFO: Latency metrics for node capz-67tgp2-control-plane-2chph Jan 14 18:35:22.369: INFO: Logging node info for node capz-67tgp2-mp-0000000 Jan 14 18:35:22.474: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-mp-0000000 d6bf69fc-90f8-43c8-9623-356f58ea157f 16641 0 2023-01-14 18:19:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:northeurope failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-mp-0000000 
kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/owner-kind:MachinePool cluster.x-k8s.io/owner-name:capz-67tgp2-mp-0 kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.243.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-14 18:19:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 18:20:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.1.0/24\"":{}}}} } {manager Update v1 2023-01-14 18:21:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kubelet Update v1 2023-01-14 18:33:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.1.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-67tgp2-mp-0/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[192.168.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31025332224 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27922798956 0} {<nil>} 27922798956 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:20:12 +0000 UTC,LastTransitionTime:2023-01-14 18:20:12 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:33:54 +0000 UTC,LastTransitionTime:2023-01-14 18:19:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-67tgp2-mp-0000000,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:95d9ab6ead5141e2b46b1d18fec95432,SystemUUID:3fc8a171-f25a-2049-95d3-3c4be76d51a7,BootID:b9ac1a12-eff5-45ad-b970-9df972ef339e,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 
registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:35:22.474: INFO: Logging kubelet events for node capz-67tgp2-mp-0000000 Jan 14 18:35:22.579: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-mp-0000000 Jan 14 18:35:22.758: INFO: dns-test-eeb40b41-fc0f-431a-8cac-0735a1f4243b started at 2023-01-14 18:33:46 +0000 UTC (0+3 container statuses recorded) Jan 14 18:35:22.758: INFO: Container jessie-querier ready: false, restart count 0 Jan 14 18:35:22.758: INFO: Container querier ready: false, 
restart count 0 Jan 14 18:35:22.758: INFO: Container webserver ready: false, restart count 0 Jan 14 18:35:22.758: INFO: pod1 started at 2023-01-14 18:34:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container agnhost-container ready: true, restart count 0 Jan 14 18:35:22.758: INFO: execpodg9czm started at 2023-01-14 18:35:00 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container agnhost-container ready: true, restart count 0 Jan 14 18:35:22.758: INFO: update-demo-nautilus-mcn6g started at 2023-01-14 18:32:15 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container update-demo ready: true, restart count 0 Jan 14 18:35:22.758: INFO: test-deployment-7b7876f9d6-zqb4p started at 2023-01-14 18:30:28 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:35:22.758: INFO: alpine-nnp-true-a4de807c-d017-4cbf-80dd-9efa36816371 started at 2023-01-14 18:33:49 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container alpine-nnp-true-a4de807c-d017-4cbf-80dd-9efa36816371 ready: false, restart count 0 Jan 14 18:35:22.758: INFO: execpodmhfjd started at 2023-01-14 18:34:57 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container agnhost-container ready: true, restart count 0 Jan 14 18:35:22.758: INFO: externalname-service-pq2wx started at 2023-01-14 18:34:36 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container externalname-service ready: true, restart count 0 Jan 14 18:35:22.758: INFO: coredns-56f4c55bf9-4pfjc started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container coredns ready: true, restart count 0 Jan 14 18:35:22.758: INFO: metrics-server-795d765ff8-rskk8 started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container metrics-server ready: true, restart count 0 Jan 14 18:35:22.758: INFO: ss2-0 started at 2023-01-14 18:31:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:22.758: INFO: ss2-0 started at 2023-01-14 18:34:37 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:22.758: INFO: cloud-node-manager-l846f started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:35:22.758: INFO: ss2-2 started at <nil> (0+0 container statuses recorded) Jan 14 18:35:22.758: INFO: test-ss-0 started at 2023-01-14 18:28:36 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:22.758: INFO: calico-node-t5npc started at 2023-01-14 18:19:05 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:35:22.758: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:35:22.758: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:35:22.758: INFO: calico-kube-controllers-657b584867-tn8lq started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 14 18:35:22.758: INFO: pod-secrets-5a523e88-d1f1-46b1-b8c2-7b0072c2daca started at 
<nil> (0+0 container statuses recorded) Jan 14 18:35:22.758: INFO: coredns-56f4c55bf9-zp98j started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container coredns ready: true, restart count 0 Jan 14 18:35:22.758: INFO: tester started at 2023-01-14 18:35:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container tester ready: true, restart count 0 Jan 14 18:35:22.758: INFO: ss-1 started at 2023-01-14 18:35:14 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container webserver ready: false, restart count 0 Jan 14 18:35:22.758: INFO: sample-webhook-deployment-865554f4d9-bb228 started at 2023-01-14 18:35:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container sample-webhook ready: false, restart count 0 Jan 14 18:35:22.758: INFO: sysctl-f358e086-4a11-4d39-95fe-d645fb791239 started at 2023-01-14 18:35:15 +0000 UTC (0+0 container statuses recorded) Jan 14 18:35:22.758: INFO: kube-proxy-8jftq started at 2023-01-14 18:19:05 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:35:22.758: INFO: test-rolling-update-deployment-7549d9f46d-pklnz started at <nil> (0+0 container statuses recorded) Jan 14 18:35:22.758: INFO: downward-api-1073a5a4-0d5f-4af3-9e34-20a20f87b5ea started at 2023-01-14 18:35:19 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container dapi-container ready: false, restart count 0 Jan 14 18:35:22.758: INFO: pod-qos-class-bddd171a-e154-4523-9abb-837b2095dfbb started at 2023-01-14 18:33:27 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:22.758: INFO: Container agnhost ready: false, restart count 0 Jan 14 18:35:23.838: INFO: Latency metrics for node capz-67tgp2-mp-0000000 Jan 14 18:35:23.838: INFO: Logging node info for node capz-67tgp2-mp-0000001 Jan 14 18:35:23.943: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-mp-0000001 a57d1a46-19d4-4265-8229-3bb32b89963d 19871 0 2023-01-14 18:18:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:northeurope failure-domain.beta.kubernetes.io/zone:1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-mp-0000001 kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:1] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/owner-kind:MachinePool cluster.x-k8s.io/owner-name:capz-67tgp2-mp-0 kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.14.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-14 18:18:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } 
{kubeadm Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2023-01-14 18:20:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 18:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.2.0/24\"":{}}}} } {manager Update v1 2023-01-14 18:21:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kubelet Update v1 2023-01-14 18:35:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.2.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-67tgp2-mp-0/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[192.168.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31025332224 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27922798956 0} {<nil>} 27922798956 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:20:41 +0000 UTC,LastTransitionTime:2023-01-14 18:20:41 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:35:23 +0000 UTC,LastTransitionTime:2023-01-14 18:20:32 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-67tgp2-mp-0000001,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e38f17c71746485985c8ebe9f1d87480,SystemUUID:31667858-013a-6c49-bd37-41a0bfb4cd7c,BootID:a61dc5b1-073f-4988-b019-c5aa35ecae86,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec 
registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:35:23.943: INFO: Logging kubelet events for node capz-67tgp2-mp-0000001 Jan 14 18:35:24.049: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-mp-0000001 Jan 14 18:35:24.215: INFO: pod-init-e3f25dbe-5e64-4732-8132-bc1e8e27a112 started at 2023-01-14 18:35:14 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Init container init1 ready: false, restart count 0 Jan 14 18:35:24.215: INFO: Init container init2 ready: false, restart count 0 Jan 14 18:35:24.215: INFO: Container run1 ready: false, restart count 0 Jan 14 18:35:24.215: INFO: update-demo-nautilus-9757j started at 2023-01-14 18:30:58 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container update-demo ready: true, restart count 0 Jan 14 18:35:24.215: INFO: externalname-service-2nvd6 started at 2023-01-14 18:34:36 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container externalname-service ready: true, restart count 0 Jan 14 18:35:24.215: INFO: ss2-1 started at 2023-01-14 18:33:26 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:24.215: INFO: update-demo-nautilus-gtnf9 started at 2023-01-14 18:32:15 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container update-demo ready: true, restart count 0 Jan 14 18:35:24.215: INFO: pod-ready started at 2023-01-14 18:34:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container pod-readiness-gate ready: true, restart count 0 Jan 14 18:35:24.215: INFO: image-pull-test52485987-1264-447b-b3c6-bbe4761b3eb2 started at 2023-01-14 18:33:48 +0000 UTC (0+1 container statuses recorded) Jan 
14 18:35:24.215: INFO: Container image-pull-test ready: false, restart count 0 Jan 14 18:35:24.215: INFO: cloud-node-manager-c24hp started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:35:24.215: INFO: test-deployment-7df74c55ff-84hdq started at 2023-01-14 18:29:43 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:35:24.215: INFO: ss2-1 started at 2023-01-14 18:33:52 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:24.215: INFO: sample-apiserver-deployment-55bd96fd47-ff7kc started at 2023-01-14 18:31:43 +0000 UTC (0+2 container statuses recorded) Jan 14 18:35:24.215: INFO: Container etcd ready: true, restart count 0 Jan 14 18:35:24.215: INFO: Container sample-apiserver ready: false, restart count 0 Jan 14 18:35:24.215: INFO: ss-0 started at 2023-01-14 18:35:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:24.215: INFO: busybox-81487092-f501-4426-acf5-c16c8471c3c4 started at 2023-01-14 18:34:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container busybox ready: false, restart count 0 Jan 14 18:35:24.215: INFO: test-rolling-update-controller-lh8rd started at 2023-01-14 18:31:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container httpd ready: true, restart count 0 Jan 14 18:35:24.215: INFO: ss-0 started at 2023-01-14 18:34:43 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:24.215: INFO: busybox-user-65534-e1188811-c39c-4714-8d9f-b3aad5e7e12b started at 2023-01-14 18:35:11 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container busybox-user-65534-e1188811-c39c-4714-8d9f-b3aad5e7e12b ready: false, restart count 0 Jan 14 18:35:24.215: INFO: downwardapi-volume-b58576c1-737b-42c9-aeb6-1d8e6a721d70 started at 2023-01-14 18:35:16 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container client-container ready: true, restart count 0 Jan 14 18:35:24.215: INFO: pod-configmaps-36d07591-4990-4769-bfcb-b3813928fe8c started at 2023-01-14 18:35:15 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container env-test ready: false, restart count 0 Jan 14 18:35:24.215: INFO: test-ss-1 started at 2023-01-14 18:31:37 +0000 UTC (0+2 container statuses recorded) Jan 14 18:35:24.215: INFO: Container test-ss ready: true, restart count 0 Jan 14 18:35:24.215: INFO: Container webserver ready: true, restart count 0 Jan 14 18:35:24.215: INFO: kube-proxy-xd8xz started at 2023-01-14 18:19:07 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:35:24.215: INFO: pod2 started at <nil> (0+0 container statuses recorded) Jan 14 18:35:24.215: INFO: calico-node-lzp55 started at 2023-01-14 18:19:07 +0000 UTC (2+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:35:24.215: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:35:24.215: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:35:24.215: INFO: image-pull-testdb5f66f7-9de7-465c-888d-fcd0f2ef78f0 started at 2023-01-14 
18:34:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container image-pull-test ready: false, restart count 0 Jan 14 18:35:24.215: INFO: test-rs-46njb started at 2023-01-14 18:31:59 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container httpd ready: true, restart count 0 Jan 14 18:35:24.215: INFO: test-deployment-7b7876f9d6-cjtpl started at 2023-01-14 18:33:10 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:35:24.215: INFO: ss2-2 started at 2023-01-14 18:34:53 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container webserver ready: false, restart count 0 Jan 14 18:35:24.215: INFO: sample-webhook-deployment-865554f4d9-xz65d started at 2023-01-14 18:35:13 +0000 UTC (0+1 container statuses recorded) Jan 14 18:35:24.215: INFO: Container sample-webhook ready: false, restart count 0 Jan 14 18:35:25.166: INFO: Latency metrics for node capz-67tgp2-mp-0000001 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/14/23 18:35:25.166 (4.315s) < Exit [DeferCleanup (Each)] [sig-apps] ReplicaSet - dump namespaces | framework.go:206 @ 01/14/23 18:35:25.166 (4.315s) > Enter [DeferCleanup (Each)] [sig-apps] ReplicaSet - tear down framework | framework.go:203 @ 01/14/23 18:35:25.166 STEP: Destroying namespace "replicaset-4894" for this suite. - test/e2e/framework/framework.go:347 @ 01/14/23 18:35:25.166 < Exit [DeferCleanup (Each)] [sig-apps] ReplicaSet - tear down framework | framework.go:203 @ 01/14/23 18:35:25.293 (127ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/14/23 18:35:25.293 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/14/23 18:35:25.293 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sPods\sshould\srun\sthrough\sthe\slifecycle\sof\sPods\sand\sPodStatus\s\[Conformance\]$'
[FAILED] pod pod-test found in namespace pods-9842, but it should be deleted: {"metadata":{"name":"pod-test","namespace":"pods-9842","uid":"04415031-94c9-45f0-9f1f-b063e61d2246","resourceVersion":"10300","creationTimestamp":"2023-01-14T18:29:43Z","deletionTimestamp":"2023-01-14T18:30:31Z","deletionGracePeriodSeconds":1,"labels":{"test-pod":"patched","test-pod-static":"true"},"annotations":{"cni.projectcalico.org/containerID":"db74a9ca9e24cff70c736bdfb5be7ea2cdbe920838fafac225226b0000bca37c","cni.projectcalico.org/podIP":"192.168.14.252/32","cni.projectcalico.org/podIPs":"192.168.14.252/32"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2023-01-14T18:30:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2023-01-14T18:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:test-pod":{},"f:test-pod-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"pod-test\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:drop":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:runAsNonRoot":{},"f:runAsUser":{},"f:seccompProfile":{".":{},"f:type":{}}},"f:terminationGracePeriodSeconds":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T18:30:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.14.252\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-2wsl2","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"pod-test","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","resources":{},"volumeMounts":[{"name":"kube-api-access-2wsl2","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["ALL"]},"allowPrivilegeEscalation":false}}],"restartPolicy":"Always","terminationGracePeriodSeconds":1,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-67tgp2-mp-0000001","securityContext":{"runAsUser":1000,"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key"
:"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:29:43Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:30:24Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:30:24Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:29:43Z"}],"hostIP":"10.1.0.5","podIP":"192.168.14.252","podIPs":[{"ip":"192.168.14.252"}],"startTime":"2023-01-14T18:29:43Z","containerStatuses":[{"name":"pod-test","state":{"running":{"startedAt":"2023-01-14T18:30:23Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/agnhost:2.43","imageID":"registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e","containerID":"containerd://9508aacc1f24eabc19c52b8f8606893171b681bbae10099e1041699f19091e5d","started":true}],"qosClass":"BestEffort"}} Expected an error to have occurred. Got: <nil>: nil In [It] at: test/e2e/common/node/pods.go:1070 @ 01/14/23 18:31:31.005from ginkgo_report.xml
> Enter [BeforeEach] [sig-node] Pods - set up framework | framework.go:188 @ 01/14/23 18:29:42.614 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/14/23 18:29:42.615 Jan 14 18:29:42.615: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig STEP: Building a namespace api object, basename pods - test/e2e/framework/framework.go:247 @ 01/14/23 18:29:42.616 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/14/23 18:29:43.062 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/14/23 18:29:43.267 < Exit [BeforeEach] [sig-node] Pods - set up framework | framework.go:188 @ 01/14/23 18:29:43.471 (856ms) > Enter [BeforeEach] [sig-node] Pods - test/e2e/framework/metrics/init/init.go:33 @ 01/14/23 18:29:43.471 < Exit [BeforeEach] [sig-node] Pods - test/e2e/framework/metrics/init/init.go:33 @ 01/14/23 18:29:43.471 (0s) > Enter [BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:195 @ 01/14/23 18:29:43.471 < Exit [BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:195 @ 01/14/23 18:29:43.471 (0s) > Enter [It] should run through the lifecycle of Pods and PodStatus [Conformance] - test/e2e/common/node/pods.go:897 @ 01/14/23 18:29:43.471 STEP: creating a Pod with a static label - test/e2e/common/node/pods.go:931 @ 01/14/23 18:29:43.632 STEP: watching for Pod to be ready - test/e2e/common/node/pods.go:935 @ 01/14/23 18:29:43.829 Jan 14 18:29:43.937: INFO: observed Pod pod-test in namespace pods-9842 in phase Pending with labels: map[test-pod-static:true] & conditions [] Jan 14 18:29:43.937: INFO: observed Pod pod-test in namespace pods-9842 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC }] Jan 14 18:29:52.688: INFO: observed Pod pod-test in namespace pods-9842 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC }] Jan 14 18:30:01.224: INFO: observed Pod pod-test in namespace pods-9842 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC }] Jan 14 18:30:30.098: INFO: Found Pod pod-test in namespace pods-9842 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC }] STEP: patching 
the Pod with a new Label and updated data - test/e2e/common/node/pods.go:961 @ 01/14/23 18:30:30.208 STEP: getting the Pod and ensuring that it's patched - test/e2e/common/node/pods.go:997 @ 01/14/23 18:30:30.437 STEP: replacing the Pod's status Ready condition to False - test/e2e/common/node/pods.go:1003 @ 01/14/23 18:30:30.546 STEP: check the Pod again to ensure its Ready conditions are False - test/e2e/common/node/pods.go:1029 @ 01/14/23 18:30:30.772 STEP: deleting the Pod via a Collection with a LabelSelector - test/e2e/common/node/pods.go:1039 @ 01/14/23 18:30:30.772 STEP: watching for the Pod to be deleted - test/e2e/common/node/pods.go:1044 @ 01/14/23 18:30:30.893 Jan 14 18:30:31.001: INFO: observed event type MODIFIED Jan 14 18:30:38.704: INFO: observed event type MODIFIED Jan 14 18:31:30.893: INFO: failed to see DELETED event: timed out waiting for the condition [FAILED] pod pod-test found in namespace pods-9842, but it should be deleted: {"metadata":{"name":"pod-test","namespace":"pods-9842","uid":"04415031-94c9-45f0-9f1f-b063e61d2246","resourceVersion":"10300","creationTimestamp":"2023-01-14T18:29:43Z","deletionTimestamp":"2023-01-14T18:30:31Z","deletionGracePeriodSeconds":1,"labels":{"test-pod":"patched","test-pod-static":"true"},"annotations":{"cni.projectcalico.org/containerID":"db74a9ca9e24cff70c736bdfb5be7ea2cdbe920838fafac225226b0000bca37c","cni.projectcalico.org/podIP":"192.168.14.252/32","cni.projectcalico.org/podIPs":"192.168.14.252/32"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2023-01-14T18:30:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2023-01-14T18:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:test-pod":{},"f:test-pod-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"pod-test\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:drop":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:runAsNonRoot":{},"f:runAsUser":{},"f:seccompProfile":{".":{},"f:type":{}}},"f:terminationGracePeriodSeconds":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T18:30:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.14.252\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-2wsl2","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMod
e":420}}],"containers":[{"name":"pod-test","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","resources":{},"volumeMounts":[{"name":"kube-api-access-2wsl2","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["ALL"]},"allowPrivilegeEscalation":false}}],"restartPolicy":"Always","terminationGracePeriodSeconds":1,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-67tgp2-mp-0000001","securityContext":{"runAsUser":1000,"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:29:43Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:30:24Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:30:24Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:29:43Z"}],"hostIP":"10.1.0.5","podIP":"192.168.14.252","podIPs":[{"ip":"192.168.14.252"}],"startTime":"2023-01-14T18:29:43Z","containerStatuses":[{"name":"pod-test","state":{"running":{"startedAt":"2023-01-14T18:30:23Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/agnhost:2.43","imageID":"registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e","containerID":"containerd://9508aacc1f24eabc19c52b8f8606893171b681bbae10099e1041699f19091e5d","started":true}],"qosClass":"BestEffort"}} Expected an error to have occurred. Got: <nil>: nil In [It] at: test/e2e/common/node/pods.go:1070 @ 01/14/23 18:31:31.005 < Exit [It] should run through the lifecycle of Pods and PodStatus [Conformance] - test/e2e/common/node/pods.go:897 @ 01/14/23 18:31:31.005 (1m47.534s) > Enter [AfterEach] [sig-node] Pods - test/e2e/framework/node/init/init.go:33 @ 01/14/23 18:31:31.005 Jan 14 18:31:31.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] Pods - test/e2e/framework/node/init/init.go:33 @ 01/14/23 18:31:31.285 (280ms) > Enter [DeferCleanup (Each)] [sig-node] Pods - test/e2e/framework/metrics/init/init.go:35 @ 01/14/23 18:31:31.285 < Exit [DeferCleanup (Each)] [sig-node] Pods - test/e2e/framework/metrics/init/init.go:35 @ 01/14/23 18:31:31.285 (0s) > Enter [DeferCleanup (Each)] [sig-node] Pods - dump namespaces | framework.go:206 @ 01/14/23 18:31:31.285 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/14/23 18:31:31.285 STEP: Collecting events from namespace "pods-9842". - test/e2e/framework/debug/dump.go:42 @ 01/14/23 18:31:31.285 STEP: Found 6 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/14/23 18:31:31.41 Jan 14 18:31:31.410: INFO: At 2023-01-14 18:29:43 +0000 UTC - event for pod-test: {default-scheduler } Scheduled: Successfully assigned pods-9842/pod-test to capz-67tgp2-mp-0000001 Jan 14 18:31:31.410: INFO: At 2023-01-14 18:30:15 +0000 UTC - event for pod-test: {kubelet capz-67tgp2-mp-0000001} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 14 18:31:31.410: INFO: At 2023-01-14 18:30:18 +0000 UTC - event for pod-test: {kubelet capz-67tgp2-mp-0000001} Created: Created container pod-test Jan 14 18:31:31.410: INFO: At 2023-01-14 18:30:24 +0000 UTC - event for pod-test: {kubelet capz-67tgp2-mp-0000001} Started: Started container pod-test Jan 14 18:31:31.410: INFO: At 2023-01-14 18:30:30 +0000 UTC - event for pod-test: {kubelet capz-67tgp2-mp-0000001} Killing: Container pod-test definition changed, will be restarted Jan 14 18:31:31.410: INFO: At 2023-01-14 18:30:33 +0000 UTC - event for pod-test: {kubelet capz-67tgp2-mp-0000001} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" Jan 14 18:31:31.521: INFO: POD NODE PHASE GRACE CONDITIONS Jan 14 18:31:31.521: INFO: pod-test capz-67tgp2-mp-0000001 Running 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC }] Jan 14 18:31:31.521: INFO: Jan 14 18:31:31.796: INFO: Logging node info for node capz-67tgp2-control-plane-2chph Jan 14 18:31:31.910: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-control-plane-2chph 28170de3-aa87-4a67-a5ad-65493aeb11b3 1854 0 2023-01-14 18:16:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-control-plane-2chph kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:northeurope-2] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-67tgp2-control-plane-tj79f cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-67tgp2-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.35.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-14 18:16:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubeadm Update v1 2023-01-14 18:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-01-14 18:17:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 
18:17:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-node-manager Update v1 2023-01-14 18:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {manager Update v1 2023-01-14 18:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-14 18:26:26 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.0.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachines/capz-67tgp2-control-plane-2chph,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[192.168.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:17:47 +0000 UTC,LastTransitionTime:2023-01-14 18:17:47 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:26:26 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:26:26 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:26:26 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:26:26 +0000 UTC,LastTransitionTime:2023-01-14 18:17:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting 
ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.0.0.4,},NodeAddress{Type:Hostname,Address:capz-67tgp2-control-plane-2chph,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aa56c5629889429baa21826756529ecb,SystemUUID:744c1c53-9da3-134c-b7da-86c573f76ec3,BootID:b6ed8583-6ec6-40d3-b9e2-4bfd39a59694,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-controller-manager@sha256:a52d9377e1464d9e2d827e6555d7edf9082b5d85b60676d2fd74b87e202bad0c capzci.azurecr.io/azure-cloud-controller-manager:63c1cd3],SizeBytes:16980267,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf 
capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:31:31.911: INFO: Logging kubelet events for node capz-67tgp2-control-plane-2chph Jan 14 18:31:32.016: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-control-plane-2chph Jan 14 18:31:32.234: INFO: cloud-node-manager-5qlnt started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:31:32.234: INFO: cloud-controller-manager-64479fbc67-xdds2 started at 2023-01-14 18:20:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Container cloud-controller-manager ready: true, restart count 0 Jan 14 18:31:32.234: INFO: etcd-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Container etcd ready: true, restart count 0 Jan 14 18:31:32.234: INFO: kube-apiserver-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Container kube-apiserver ready: true, restart count 0 Jan 14 18:31:32.234: INFO: kube-scheduler-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:45 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Container kube-scheduler ready: true, restart count 0 Jan 14 18:31:32.234: INFO: kube-proxy-j74l7 started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:31:32.234: INFO: calico-node-g5dqz started at 2023-01-14 18:17:11 +0000 UTC (2+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:31:32.234: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:31:32.234: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:31:32.234: INFO: kube-controller-manager-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Container kube-controller-manager ready: true, restart count 0 Jan 14 18:31:32.762: INFO: Latency metrics for node capz-67tgp2-control-plane-2chph Jan 14 18:31:32.762: INFO: Logging node info for node capz-67tgp2-mp-0000000 Jan 14 18:31:32.870: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-mp-0000000 d6bf69fc-90f8-43c8-9623-356f58ea157f 5409 0 2023-01-14 18:19:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:northeurope failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-mp-0000000 
kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/owner-kind:MachinePool cluster.x-k8s.io/owner-name:capz-67tgp2-mp-0 kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.243.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-14 18:19:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 18:20:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.1.0/24\"":{}}}} } {manager Update v1 2023-01-14 18:21:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kubelet Update v1 2023-01-14 18:29:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.1.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-67tgp2-mp-0/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[192.168.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31025332224 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27922798956 0} {<nil>} 27922798956 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:20:12 +0000 UTC,LastTransitionTime:2023-01-14 18:20:12 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:29:25 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:29:25 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:29:25 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:29:25 +0000 UTC,LastTransitionTime:2023-01-14 18:19:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-67tgp2-mp-0000000,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:95d9ab6ead5141e2b46b1d18fec95432,SystemUUID:3fc8a171-f25a-2049-95d3-3c4be76d51a7,BootID:b9ac1a12-eff5-45ad-b970-9df972ef339e,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 
registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:31:32.870: INFO: Logging kubelet events for node capz-67tgp2-mp-0000000 Jan 14 18:31:32.976: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-mp-0000000 Jan 14 18:31:33.158: INFO: webserver-deployment-7f5969cbc7-77nh6 started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:33.158: INFO: rs-h6xr5 started at 2023-01-14 18:31:31 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container donothing ready: false, restart count 0 Jan 14 18:31:33.158: INFO: pod-service-account-mountsa-mountspec started at 2023-01-14 18:30:01 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container token-test ready: false, restart count 0 Jan 14 18:31:33.158: INFO: 
server-envvars-aa40d956-5c40-4512-8f5b-4bb99b685461 started at 2023-01-14 18:31:01 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container srv ready: true, restart count 0 Jan 14 18:31:33.158: INFO: adopt-release-4rgz9 started at 2023-01-14 18:31:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container c ready: true, restart count 0 Jan 14 18:31:33.158: INFO: coredns-56f4c55bf9-zp98j started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container coredns ready: true, restart count 0 Jan 14 18:31:33.158: INFO: execpod-affinity9lst7 started at 2023-01-14 18:31:16 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container agnhost-container ready: true, restart count 0 Jan 14 18:31:33.158: INFO: webserver-deployment-7f5969cbc7-qp4pt started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:33.158: INFO: affinity-clusterip-transition-fhp56 started at 2023-01-14 18:31:04 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Jan 14 18:31:33.158: INFO: pod2 started at 2023-01-14 18:30:51 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container agnhost ready: true, restart count 0 Jan 14 18:31:33.158: INFO: kube-proxy-8jftq started at 2023-01-14 18:19:05 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:31:33.158: INFO: update-demo-nautilus-kkvz5 started at 2023-01-14 18:30:58 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container update-demo ready: false, restart count 0 Jan 14 18:31:33.158: INFO: webserver-deployment-7f5969cbc7-6x2pr started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:33.158: INFO: pod-service-account-mountsa started at 2023-01-14 18:30:01 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container token-test ready: false, restart count 0 Jan 14 18:31:33.158: INFO: test-deployment-2qcdv-54bc444df-lcbkp started at 2023-01-14 18:31:02 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:33.158: INFO: rs-z2hzj started at 2023-01-14 18:31:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container donothing ready: true, restart count 0 Jan 14 18:31:33.158: INFO: webserver-deployment-7f5969cbc7-hmwtg started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:33.158: INFO: pod1 started at 2023-01-14 18:29:51 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container agnhost ready: true, restart count 0 Jan 14 18:31:33.158: INFO: test-grpc-f156c965-2ae0-4fbe-9e25-499a81961e3e started at 2023-01-14 18:30:41 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container etcd ready: true, restart count 0 Jan 14 18:31:33.158: INFO: test-rs-ndvhm started at 2023-01-14 18:29:55 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:33.158: INFO: pod-xvzqs started at 2023-01-14 18:31:31 +0000 UTC (0+1 container statuses 
recorded) Jan 14 18:31:33.158: INFO: Container agnhost ready: false, restart count 0 Jan 14 18:31:33.158: INFO: affinity-clusterip-transition-qrk8l started at 2023-01-14 18:31:04 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Jan 14 18:31:33.158: INFO: pod-service-account-defaultsa-mountspec started at 2023-01-14 18:30:01 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container token-test ready: false, restart count 0 Jan 14 18:31:33.158: INFO: e2e-host-exec started at 2023-01-14 18:31:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container e2e-host-exec ready: true, restart count 0 Jan 14 18:31:33.158: INFO: test-deployment-7b7876f9d6-zqb4p started at 2023-01-14 18:30:28 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container test-deployment ready: false, restart count 0 Jan 14 18:31:33.158: INFO: pod3 started at 2023-01-14 18:31:06 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container agnhost ready: true, restart count 0 Jan 14 18:31:33.158: INFO: webserver-deployment-7f5969cbc7-xfkvh started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:33.158: INFO: coredns-56f4c55bf9-4pfjc started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container coredns ready: true, restart count 0 Jan 14 18:31:33.158: INFO: metrics-server-795d765ff8-rskk8 started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container metrics-server ready: true, restart count 0 Jan 14 18:31:33.158: INFO: ss2-0 started at 2023-01-14 18:31:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container webserver ready: false, restart count 0 Jan 14 18:31:33.158: INFO: cloud-node-manager-l846f started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:31:33.158: INFO: test-ss-0 started at 2023-01-14 18:28:36 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container webserver ready: true, restart count 0 Jan 14 18:31:33.158: INFO: test-deployment-7df74c55ff-s9lvr started at 2023-01-14 18:30:28 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:31:33.158: INFO: calico-node-t5npc started at 2023-01-14 18:19:05 +0000 UTC (2+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:31:33.158: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:31:33.158: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:31:33.158: INFO: calico-kube-controllers-657b584867-tn8lq started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 14 18:31:33.158: INFO: pod-service-account-nomountsa-mountspec started at 2023-01-14 18:30:01 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container token-test ready: false, restart count 0 Jan 14 18:31:35.032: INFO: Latency metrics for node capz-67tgp2-mp-0000000 Jan 14 18:31:35.033: INFO: Logging node info for node capz-67tgp2-mp-0000001 Jan 14 
18:31:35.141: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-mp-0000001 a57d1a46-19d4-4265-8229-3bb32b89963d 4424 0 2023-01-14 18:18:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:northeurope failure-domain.beta.kubernetes.io/zone:1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-mp-0000001 kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:1] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/owner-kind:MachinePool cluster.x-k8s.io/owner-name:capz-67tgp2-mp-0 kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.14.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-14 18:18:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2023-01-14 18:20:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 18:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.2.0/24\"":{}}}} } {manager Update v1 2023-01-14 18:21:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kubelet Update v1 2023-01-14 18:28:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.2.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-67tgp2-mp-0/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[192.168.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{31025332224 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27922798956 0} {<nil>} 27922798956 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:20:41 +0000 UTC,LastTransitionTime:2023-01-14 18:20:41 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:28:54 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:28:54 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:28:54 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:28:54 +0000 UTC,LastTransitionTime:2023-01-14 18:20:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-67tgp2-mp-0000001,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e38f17c71746485985c8ebe9f1d87480,SystemUUID:31667858-013a-6c49-bd37-41a0bfb4cd7c,BootID:a61dc5b1-073f-4988-b019-c5aa35ecae86,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 
registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:31:35.141: INFO: Logging kubelet events for node capz-67tgp2-mp-0000001 Jan 14 18:31:35.246: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-mp-0000001 Jan 14 18:31:35.439: INFO: calico-node-lzp55 started at 2023-01-14 18:19:07 +0000 UTC (2+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:31:35.439: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:31:35.439: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:31:35.439: INFO: privileged-pod started at 2023-01-14 18:31:28 +0000 UTC (0+2 container statuses recorded) Jan 14 18:31:35.439: INFO: Container not-privileged-container ready: true, restart count 0 Jan 14 18:31:35.439: INFO: Container privileged-container ready: true, restart count 0 Jan 14 18:31:35.439: INFO: terminate-cmd-rpne7ecf535-e473-4273-8075-40044ca55f4e started at 2023-01-14 18:31:30 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container terminate-cmd-rpn ready: false, restart count 0 Jan 14 18:31:35.439: INFO: adopt-release-5tk47 started at 2023-01-14 18:30:51 +0000 UTC (0+1 container statuses recorded) Jan 14 
18:31:35.439: INFO: Container c ready: true, restart count 0 Jan 14 18:31:35.439: INFO: adopt-release-ctmlr started at 2023-01-14 18:30:51 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container c ready: true, restart count 0 Jan 14 18:31:35.439: INFO: pod-adoption started at 2023-01-14 18:29:01 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container pod-adoption ready: false, restart count 0 Jan 14 18:31:35.439: INFO: webserver-deployment-7f5969cbc7-f6jmv started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:35.439: INFO: webserver-deployment-7f5969cbc7-zmkvp started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:35.439: INFO: update-demo-nautilus-9757j started at 2023-01-14 18:30:58 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container update-demo ready: false, restart count 0 Jan 14 18:31:35.439: INFO: busybox-6838bd23-aab9-4abf-b816-7aa83c52b6f1 started at 2023-01-14 18:30:02 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container busybox ready: true, restart count 0 Jan 14 18:31:35.439: INFO: webserver-deployment-7f5969cbc7-jhph8 started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:35.439: INFO: test-webserver-bebd24c8-7e4e-468d-b7f9-f7dacc78fdd5 started at 2023-01-14 18:28:14 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container test-webserver ready: true, restart count 0 Jan 14 18:31:35.439: INFO: rs-q4mbj started at 2023-01-14 18:31:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container donothing ready: true, restart count 0 Jan 14 18:31:35.439: INFO: pod-configmaps-d93eabd0-999b-4501-ac99-2dcef2f85f8f started at 2023-01-14 18:29:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container agnhost-container ready: true, restart count 0 Jan 14 18:31:35.439: INFO: cloud-node-manager-c24hp started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:31:35.439: INFO: dns-test-45948e0a-047f-4523-969c-9bc41b0b2ef8 started at 2023-01-14 18:28:32 +0000 UTC (0+3 container statuses recorded) Jan 14 18:31:35.439: INFO: Container jessie-querier ready: true, restart count 0 Jan 14 18:31:35.439: INFO: Container querier ready: true, restart count 0 Jan 14 18:31:35.439: INFO: Container webserver ready: true, restart count 0 Jan 14 18:31:35.439: INFO: pod-test started at 2023-01-14 18:29:43 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container pod-test ready: true, restart count 0 Jan 14 18:31:35.439: INFO: rs-mkqq8 started at 2023-01-14 18:31:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container donothing ready: true, restart count 0 Jan 14 18:31:35.439: INFO: test-deployment-7df74c55ff-84hdq started at 2023-01-14 18:29:43 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:31:35.439: INFO: dns-test-6b8b5a61-e5df-42db-ac53-e949636abcb0 started at 2023-01-14 18:30:44 +0000 UTC (0+3 container statuses recorded) Jan 14 18:31:35.439: INFO: Container jessie-querier 
ready: false, restart count 0 Jan 14 18:31:35.439: INFO: Container querier ready: false, restart count 0 Jan 14 18:31:35.439: INFO: Container webserver ready: false, restart count 0 Jan 14 18:31:35.439: INFO: image-pull-test02589db6-0ccc-440c-9ec6-225d18b71d37 started at 2023-01-14 18:31:06 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container image-pull-test ready: false, restart count 0 Jan 14 18:31:35.439: INFO: affinity-clusterip-transition-4qwm9 started at 2023-01-14 18:31:04 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Jan 14 18:31:35.439: INFO: image-pull-test1a9e8d67-d219-4a39-b91b-061fb78c9cfc started at 2023-01-14 18:29:47 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container image-pull-test ready: false, restart count 0 Jan 14 18:31:35.439: INFO: busybox-2cf95817-dadf-4675-8b51-2fdbeab77a73 started at 2023-01-14 18:31:12 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container busybox ready: true, restart count 0 Jan 14 18:31:35.439: INFO: proxy-service-p2gl8-jl8p6 started at 2023-01-14 18:31:22 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container proxy-service-p2gl8 ready: true, restart count 0 Jan 14 18:31:35.439: INFO: webserver-deployment-7f5969cbc7-vn5tb started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:35.439: INFO: pod-projected-secrets-99ff3575-08d9-47b4-9937-ccb362eac3a5 started at 2023-01-14 18:30:43 +0000 UTC (0+3 container statuses recorded) Jan 14 18:31:35.439: INFO: Container creates-volume-test ready: true, restart count 0 Jan 14 18:31:35.439: INFO: Container dels-volume-test ready: true, restart count 0 Jan 14 18:31:35.439: INFO: Container upds-volume-test ready: true, restart count 0 Jan 14 18:31:35.439: INFO: liveness-c28e233c-23c4-451f-b61f-015c73828952 started at 2023-01-14 18:29:26 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container agnhost-container ready: true, restart count 4 Jan 14 18:31:35.439: INFO: kube-proxy-xd8xz started at 2023-01-14 18:19:07 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:31:35.439: INFO: e2e-test-httpd-pod started at 2023-01-14 18:29:01 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container e2e-test-httpd-pod ready: false, restart count 0 Jan 14 18:31:35.439: INFO: sample-crd-conversion-webhook-deployment-74ff66dd47-h5vtw started at 2023-01-14 18:31:29 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container sample-crd-conversion-webhook ready: false, restart count 0 Jan 14 18:31:35.439: INFO: webserver-deployment-7f5969cbc7-fcnzv started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:37.011: INFO: Latency metrics for node capz-67tgp2-mp-0000001 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/14/23 18:31:37.011 (5.726s) < Exit [DeferCleanup (Each)] [sig-node] Pods - dump namespaces | framework.go:206 @ 01/14/23 18:31:37.011 (5.726s) > Enter [DeferCleanup (Each)] [sig-node] Pods - tear down framework | framework.go:203 @ 01/14/23 18:31:37.011 STEP: Destroying namespace "pods-9842" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/14/23 18:31:37.011 < Exit [DeferCleanup (Each)] [sig-node] Pods - tear down framework | framework.go:203 @ 01/14/23 18:31:37.123 (111ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/14/23 18:31:37.123 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/14/23 18:31:37.123 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sPods\sshould\srun\sthrough\sthe\slifecycle\sof\sPods\sand\sPodStatus\s\[Conformance\]$'
[FAILED] pod pod-test found in namespace pods-9842, but it should be deleted: {"metadata":{"name":"pod-test","namespace":"pods-9842","uid":"04415031-94c9-45f0-9f1f-b063e61d2246","resourceVersion":"10300","creationTimestamp":"2023-01-14T18:29:43Z","deletionTimestamp":"2023-01-14T18:30:31Z","deletionGracePeriodSeconds":1,"labels":{"test-pod":"patched","test-pod-static":"true"},"annotations":{"cni.projectcalico.org/containerID":"db74a9ca9e24cff70c736bdfb5be7ea2cdbe920838fafac225226b0000bca37c","cni.projectcalico.org/podIP":"192.168.14.252/32","cni.projectcalico.org/podIPs":"192.168.14.252/32"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2023-01-14T18:30:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2023-01-14T18:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:test-pod":{},"f:test-pod-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"pod-test\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:drop":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:runAsNonRoot":{},"f:runAsUser":{},"f:seccompProfile":{".":{},"f:type":{}}},"f:terminationGracePeriodSeconds":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T18:30:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.14.252\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-2wsl2","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"pod-test","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","resources":{},"volumeMounts":[{"name":"kube-api-access-2wsl2","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["ALL"]},"allowPrivilegeEscalation":false}}],"restartPolicy":"Always","terminationGracePeriodSeconds":1,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-67tgp2-mp-0000001","securityContext":{"runAsUser":1000,"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key"
:"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:29:43Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:30:24Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:30:24Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:29:43Z"}],"hostIP":"10.1.0.5","podIP":"192.168.14.252","podIPs":[{"ip":"192.168.14.252"}],"startTime":"2023-01-14T18:29:43Z","containerStatuses":[{"name":"pod-test","state":{"running":{"startedAt":"2023-01-14T18:30:23Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/agnhost:2.43","imageID":"registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e","containerID":"containerd://9508aacc1f24eabc19c52b8f8606893171b681bbae10099e1041699f19091e5d","started":true}],"qosClass":"BestEffort"}} Expected an error to have occurred. Got: <nil>: nil In [It] at: test/e2e/common/node/pods.go:1070 @ 01/14/23 18:31:31.005from junit_01.xml
> Enter [BeforeEach] [sig-node] Pods - set up framework | framework.go:188 @ 01/14/23 18:29:42.614 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/14/23 18:29:42.615 Jan 14 18:29:42.615: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig STEP: Building a namespace api object, basename pods - test/e2e/framework/framework.go:247 @ 01/14/23 18:29:42.616 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/14/23 18:29:43.062 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/14/23 18:29:43.267 < Exit [BeforeEach] [sig-node] Pods - set up framework | framework.go:188 @ 01/14/23 18:29:43.471 (856ms) > Enter [BeforeEach] [sig-node] Pods - test/e2e/framework/metrics/init/init.go:33 @ 01/14/23 18:29:43.471 < Exit [BeforeEach] [sig-node] Pods - test/e2e/framework/metrics/init/init.go:33 @ 01/14/23 18:29:43.471 (0s) > Enter [BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:195 @ 01/14/23 18:29:43.471 < Exit [BeforeEach] [sig-node] Pods - test/e2e/common/node/pods.go:195 @ 01/14/23 18:29:43.471 (0s) > Enter [It] should run through the lifecycle of Pods and PodStatus [Conformance] - test/e2e/common/node/pods.go:897 @ 01/14/23 18:29:43.471 STEP: creating a Pod with a static label - test/e2e/common/node/pods.go:931 @ 01/14/23 18:29:43.632 STEP: watching for Pod to be ready - test/e2e/common/node/pods.go:935 @ 01/14/23 18:29:43.829 Jan 14 18:29:43.937: INFO: observed Pod pod-test in namespace pods-9842 in phase Pending with labels: map[test-pod-static:true] & conditions [] Jan 14 18:29:43.937: INFO: observed Pod pod-test in namespace pods-9842 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC }] Jan 14 18:29:52.688: INFO: observed Pod pod-test in namespace pods-9842 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC }] Jan 14 18:30:01.224: INFO: observed Pod pod-test in namespace pods-9842 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC }] Jan 14 18:30:30.098: INFO: Found Pod pod-test in namespace pods-9842 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC }] STEP: patching 
the Pod with a new Label and updated data - test/e2e/common/node/pods.go:961 @ 01/14/23 18:30:30.208 STEP: getting the Pod and ensuring that it's patched - test/e2e/common/node/pods.go:997 @ 01/14/23 18:30:30.437 STEP: replacing the Pod's status Ready condition to False - test/e2e/common/node/pods.go:1003 @ 01/14/23 18:30:30.546 STEP: check the Pod again to ensure its Ready conditions are False - test/e2e/common/node/pods.go:1029 @ 01/14/23 18:30:30.772 STEP: deleting the Pod via a Collection with a LabelSelector - test/e2e/common/node/pods.go:1039 @ 01/14/23 18:30:30.772 STEP: watching for the Pod to be deleted - test/e2e/common/node/pods.go:1044 @ 01/14/23 18:30:30.893 Jan 14 18:30:31.001: INFO: observed event type MODIFIED Jan 14 18:30:38.704: INFO: observed event type MODIFIED Jan 14 18:31:30.893: INFO: failed to see DELETED event: timed out waiting for the condition [FAILED] pod pod-test found in namespace pods-9842, but it should be deleted: {"metadata":{"name":"pod-test","namespace":"pods-9842","uid":"04415031-94c9-45f0-9f1f-b063e61d2246","resourceVersion":"10300","creationTimestamp":"2023-01-14T18:29:43Z","deletionTimestamp":"2023-01-14T18:30:31Z","deletionGracePeriodSeconds":1,"labels":{"test-pod":"patched","test-pod-static":"true"},"annotations":{"cni.projectcalico.org/containerID":"db74a9ca9e24cff70c736bdfb5be7ea2cdbe920838fafac225226b0000bca37c","cni.projectcalico.org/podIP":"192.168.14.252/32","cni.projectcalico.org/podIPs":"192.168.14.252/32"},"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2023-01-14T18:30:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2023-01-14T18:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:test-pod":{},"f:test-pod-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"pod-test\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:drop":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:runAsNonRoot":{},"f:runAsUser":{},"f:seccompProfile":{".":{},"f:type":{}}},"f:terminationGracePeriodSeconds":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T18:30:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.14.252\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-2wsl2","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMod
e":420}}],"containers":[{"name":"pod-test","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","resources":{},"volumeMounts":[{"name":"kube-api-access-2wsl2","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"drop":["ALL"]},"allowPrivilegeEscalation":false}}],"restartPolicy":"Always","terminationGracePeriodSeconds":1,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-67tgp2-mp-0000001","securityContext":{"runAsUser":1000,"runAsNonRoot":true,"seccompProfile":{"type":"RuntimeDefault"}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:29:43Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:30:24Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:30:24Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T18:29:43Z"}],"hostIP":"10.1.0.5","podIP":"192.168.14.252","podIPs":[{"ip":"192.168.14.252"}],"startTime":"2023-01-14T18:29:43Z","containerStatuses":[{"name":"pod-test","state":{"running":{"startedAt":"2023-01-14T18:30:23Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/agnhost:2.43","imageID":"registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e","containerID":"containerd://9508aacc1f24eabc19c52b8f8606893171b681bbae10099e1041699f19091e5d","started":true}],"qosClass":"BestEffort"}} Expected an error to have occurred. Got: <nil>: nil In [It] at: test/e2e/common/node/pods.go:1070 @ 01/14/23 18:31:31.005 < Exit [It] should run through the lifecycle of Pods and PodStatus [Conformance] - test/e2e/common/node/pods.go:897 @ 01/14/23 18:31:31.005 (1m47.534s) > Enter [AfterEach] [sig-node] Pods - test/e2e/framework/node/init/init.go:33 @ 01/14/23 18:31:31.005 Jan 14 18:31:31.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] Pods - test/e2e/framework/node/init/init.go:33 @ 01/14/23 18:31:31.285 (280ms) > Enter [DeferCleanup (Each)] [sig-node] Pods - test/e2e/framework/metrics/init/init.go:35 @ 01/14/23 18:31:31.285 < Exit [DeferCleanup (Each)] [sig-node] Pods - test/e2e/framework/metrics/init/init.go:35 @ 01/14/23 18:31:31.285 (0s) > Enter [DeferCleanup (Each)] [sig-node] Pods - dump namespaces | framework.go:206 @ 01/14/23 18:31:31.285 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/14/23 18:31:31.285 STEP: Collecting events from namespace "pods-9842". - test/e2e/framework/debug/dump.go:42 @ 01/14/23 18:31:31.285 STEP: Found 6 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/14/23 18:31:31.41 Jan 14 18:31:31.410: INFO: At 2023-01-14 18:29:43 +0000 UTC - event for pod-test: {default-scheduler } Scheduled: Successfully assigned pods-9842/pod-test to capz-67tgp2-mp-0000001 Jan 14 18:31:31.410: INFO: At 2023-01-14 18:30:15 +0000 UTC - event for pod-test: {kubelet capz-67tgp2-mp-0000001} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 14 18:31:31.410: INFO: At 2023-01-14 18:30:18 +0000 UTC - event for pod-test: {kubelet capz-67tgp2-mp-0000001} Created: Created container pod-test Jan 14 18:31:31.410: INFO: At 2023-01-14 18:30:24 +0000 UTC - event for pod-test: {kubelet capz-67tgp2-mp-0000001} Started: Started container pod-test Jan 14 18:31:31.410: INFO: At 2023-01-14 18:30:30 +0000 UTC - event for pod-test: {kubelet capz-67tgp2-mp-0000001} Killing: Container pod-test definition changed, will be restarted Jan 14 18:31:31.410: INFO: At 2023-01-14 18:30:33 +0000 UTC - event for pod-test: {kubelet capz-67tgp2-mp-0000001} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" Jan 14 18:31:31.521: INFO: POD NODE PHASE GRACE CONDITIONS Jan 14 18:31:31.521: INFO: pod-test capz-67tgp2-mp-0000001 Running 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:30:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 18:29:43 +0000 UTC }] Jan 14 18:31:31.521: INFO: Jan 14 18:31:31.796: INFO: Logging node info for node capz-67tgp2-control-plane-2chph Jan 14 18:31:31.910: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-control-plane-2chph 28170de3-aa87-4a67-a5ad-65493aeb11b3 1854 0 2023-01-14 18:16:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-control-plane-2chph kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:northeurope-2] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-67tgp2-control-plane-tj79f cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-67tgp2-control-plane kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.35.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-14 18:16:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubeadm Update v1 2023-01-14 18:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-01-14 18:17:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 
18:17:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-node-manager Update v1 2023-01-14 18:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {manager Update v1 2023-01-14 18:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-14 18:26:26 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.0.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachines/capz-67tgp2-control-plane-2chph,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[192.168.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133003395072 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119703055367 0} {<nil>} 119703055367 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:17:47 +0000 UTC,LastTransitionTime:2023-01-14 18:17:47 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:26:26 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:26:26 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:26:26 +0000 UTC,LastTransitionTime:2023-01-14 18:16:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:26:26 +0000 UTC,LastTransitionTime:2023-01-14 18:17:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting 
ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.0.0.4,},NodeAddress{Type:Hostname,Address:capz-67tgp2-control-plane-2chph,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aa56c5629889429baa21826756529ecb,SystemUUID:744c1c53-9da3-134c-b7da-86c573f76ec3,BootID:b6ed8583-6ec6-40d3-b9e2-4bfd39a59694,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-controller-manager@sha256:a52d9377e1464d9e2d827e6555d7edf9082b5d85b60676d2fd74b87e202bad0c capzci.azurecr.io/azure-cloud-controller-manager:63c1cd3],SizeBytes:16980267,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf 
capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:31:31.911: INFO: Logging kubelet events for node capz-67tgp2-control-plane-2chph Jan 14 18:31:32.016: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-control-plane-2chph Jan 14 18:31:32.234: INFO: cloud-node-manager-5qlnt started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:31:32.234: INFO: cloud-controller-manager-64479fbc67-xdds2 started at 2023-01-14 18:20:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Container cloud-controller-manager ready: true, restart count 0 Jan 14 18:31:32.234: INFO: etcd-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Container etcd ready: true, restart count 0 Jan 14 18:31:32.234: INFO: kube-apiserver-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Container kube-apiserver ready: true, restart count 0 Jan 14 18:31:32.234: INFO: kube-scheduler-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:45 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Container kube-scheduler ready: true, restart count 0 Jan 14 18:31:32.234: INFO: kube-proxy-j74l7 started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:31:32.234: INFO: calico-node-g5dqz started at 2023-01-14 18:17:11 +0000 UTC (2+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:31:32.234: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:31:32.234: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:31:32.234: INFO: kube-controller-manager-capz-67tgp2-control-plane-2chph started at 2023-01-14 18:16:44 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:32.234: INFO: Container kube-controller-manager ready: true, restart count 0 Jan 14 18:31:32.762: INFO: Latency metrics for node capz-67tgp2-control-plane-2chph Jan 14 18:31:32.762: INFO: Logging node info for node capz-67tgp2-mp-0000000 Jan 14 18:31:32.870: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-mp-0000000 d6bf69fc-90f8-43c8-9623-356f58ea157f 5409 0 2023-01-14 18:19:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:northeurope failure-domain.beta.kubernetes.io/zone:0 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-mp-0000000 
kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:0] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/owner-kind:MachinePool cluster.x-k8s.io/owner-name:capz-67tgp2-mp-0 kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.243.192 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-14 18:19:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 18:20:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.1.0/24\"":{}}}} } {manager Update v1 2023-01-14 18:21:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kubelet Update v1 2023-01-14 18:29:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.1.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-67tgp2-mp-0/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[192.168.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31025332224 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27922798956 0} {<nil>} 27922798956 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:20:12 +0000 UTC,LastTransitionTime:2023-01-14 18:20:12 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:29:25 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:29:25 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:29:25 +0000 UTC,LastTransitionTime:2023-01-14 18:19:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:29:25 +0000 UTC,LastTransitionTime:2023-01-14 18:19:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-67tgp2-mp-0000000,},NodeAddress{Type:InternalIP,Address:10.1.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:95d9ab6ead5141e2b46b1d18fec95432,SystemUUID:3fc8a171-f25a-2049-95d3-3c4be76d51a7,BootID:b9ac1a12-eff5-45ad-b970-9df972ef339e,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:78bc199299f966b0694dc4044501aee2d7ebd6862b2b0a00bca3ee8d3813c82f docker.io/calico/kube-controllers:v3.23.0],SizeBytes:56343954,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 
registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:31:32.870: INFO: Logging kubelet events for node capz-67tgp2-mp-0000000 Jan 14 18:31:32.976: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-mp-0000000 Jan 14 18:31:33.158: INFO: webserver-deployment-7f5969cbc7-77nh6 started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:33.158: INFO: rs-h6xr5 started at 2023-01-14 18:31:31 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container donothing ready: false, restart count 0 Jan 14 18:31:33.158: INFO: pod-service-account-mountsa-mountspec started at 2023-01-14 18:30:01 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container token-test ready: false, restart count 0 Jan 14 18:31:33.158: INFO: 
server-envvars-aa40d956-5c40-4512-8f5b-4bb99b685461 started at 2023-01-14 18:31:01 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container srv ready: true, restart count 0 Jan 14 18:31:33.158: INFO: adopt-release-4rgz9 started at 2023-01-14 18:31:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container c ready: true, restart count 0 Jan 14 18:31:33.158: INFO: coredns-56f4c55bf9-zp98j started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container coredns ready: true, restart count 0 Jan 14 18:31:33.158: INFO: execpod-affinity9lst7 started at 2023-01-14 18:31:16 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container agnhost-container ready: true, restart count 0 Jan 14 18:31:33.158: INFO: webserver-deployment-7f5969cbc7-qp4pt started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:33.158: INFO: affinity-clusterip-transition-fhp56 started at 2023-01-14 18:31:04 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Jan 14 18:31:33.158: INFO: pod2 started at 2023-01-14 18:30:51 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container agnhost ready: true, restart count 0 Jan 14 18:31:33.158: INFO: kube-proxy-8jftq started at 2023-01-14 18:19:05 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:31:33.158: INFO: update-demo-nautilus-kkvz5 started at 2023-01-14 18:30:58 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container update-demo ready: false, restart count 0 Jan 14 18:31:33.158: INFO: webserver-deployment-7f5969cbc7-6x2pr started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:33.158: INFO: pod-service-account-mountsa started at 2023-01-14 18:30:01 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container token-test ready: false, restart count 0 Jan 14 18:31:33.158: INFO: test-deployment-2qcdv-54bc444df-lcbkp started at 2023-01-14 18:31:02 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:33.158: INFO: rs-z2hzj started at 2023-01-14 18:31:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container donothing ready: true, restart count 0 Jan 14 18:31:33.158: INFO: webserver-deployment-7f5969cbc7-hmwtg started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:33.158: INFO: pod1 started at 2023-01-14 18:29:51 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container agnhost ready: true, restart count 0 Jan 14 18:31:33.158: INFO: test-grpc-f156c965-2ae0-4fbe-9e25-499a81961e3e started at 2023-01-14 18:30:41 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container etcd ready: true, restart count 0 Jan 14 18:31:33.158: INFO: test-rs-ndvhm started at 2023-01-14 18:29:55 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:33.158: INFO: pod-xvzqs started at 2023-01-14 18:31:31 +0000 UTC (0+1 container statuses 
recorded) Jan 14 18:31:33.158: INFO: Container agnhost ready: false, restart count 0 Jan 14 18:31:33.158: INFO: affinity-clusterip-transition-qrk8l started at 2023-01-14 18:31:04 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Jan 14 18:31:33.158: INFO: pod-service-account-defaultsa-mountspec started at 2023-01-14 18:30:01 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container token-test ready: false, restart count 0 Jan 14 18:31:33.158: INFO: e2e-host-exec started at 2023-01-14 18:31:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container e2e-host-exec ready: true, restart count 0 Jan 14 18:31:33.158: INFO: test-deployment-7b7876f9d6-zqb4p started at 2023-01-14 18:30:28 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container test-deployment ready: false, restart count 0 Jan 14 18:31:33.158: INFO: pod3 started at 2023-01-14 18:31:06 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container agnhost ready: true, restart count 0 Jan 14 18:31:33.158: INFO: webserver-deployment-7f5969cbc7-xfkvh started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:33.158: INFO: coredns-56f4c55bf9-4pfjc started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container coredns ready: true, restart count 0 Jan 14 18:31:33.158: INFO: metrics-server-795d765ff8-rskk8 started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container metrics-server ready: true, restart count 0 Jan 14 18:31:33.158: INFO: ss2-0 started at 2023-01-14 18:31:08 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container webserver ready: false, restart count 0 Jan 14 18:31:33.158: INFO: cloud-node-manager-l846f started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:31:33.158: INFO: test-ss-0 started at 2023-01-14 18:28:36 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container webserver ready: true, restart count 0 Jan 14 18:31:33.158: INFO: test-deployment-7df74c55ff-s9lvr started at 2023-01-14 18:30:28 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:31:33.158: INFO: calico-node-t5npc started at 2023-01-14 18:19:05 +0000 UTC (2+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:31:33.158: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:31:33.158: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:31:33.158: INFO: calico-kube-controllers-657b584867-tn8lq started at 2023-01-14 18:19:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container calico-kube-controllers ready: true, restart count 0 Jan 14 18:31:33.158: INFO: pod-service-account-nomountsa-mountspec started at 2023-01-14 18:30:01 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:33.158: INFO: Container token-test ready: false, restart count 0 Jan 14 18:31:35.032: INFO: Latency metrics for node capz-67tgp2-mp-0000000 Jan 14 18:31:35.033: INFO: Logging node info for node capz-67tgp2-mp-0000001 Jan 14 
18:31:35.141: INFO: Node Info: &Node{ObjectMeta:{capz-67tgp2-mp-0000001 a57d1a46-19d4-4265-8229-3bb32b89963d 4424 0 2023-01-14 18:18:59 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:northeurope failure-domain.beta.kubernetes.io/zone:1 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-67tgp2-mp-0000001 kubernetes.io/os:linux node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:northeurope topology.kubernetes.io/zone:1] map[cluster.x-k8s.io/cluster-name:capz-67tgp2 cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/owner-kind:MachinePool cluster.x-k8s.io/owner-name:capz-67tgp2-mp-0 kubeadm.alpha.kubernetes.io/cri-socket:unix:///var/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.1.0.5/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.14.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-14 18:18:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-01-14 18:19:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2023-01-14 18:20:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}} } {Go-http-client Update v1 2023-01-14 18:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {cloud-controller-manager Update v1 2023-01-14 18:21:06 +0000 UTC FieldsV1 {"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"192.168.2.0/24\"":{}}}} } {manager Update v1 2023-01-14 18:21:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}} } {kubelet Update v1 2023-01-14 18:28:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:192.168.2.0/24,DoNotUseExternalID:,ProviderID:azure:///subscriptions/===REDACTED===/resourceGroups/capz-67tgp2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-67tgp2-mp-0/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[192.168.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{31025332224 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8344743936 0} {<nil>} 8149164Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27922798956 0} {<nil>} 27922798956 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8239886336 0} {<nil>} 8046764Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-14 18:20:41 +0000 UTC,LastTransitionTime:2023-01-14 18:20:41 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-14 18:28:54 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-14 18:28:54 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-14 18:28:54 +0000 UTC,LastTransitionTime:2023-01-14 18:18:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-14 18:28:54 +0000 UTC,LastTransitionTime:2023-01-14 18:20:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-67tgp2-mp-0000001,},NodeAddress{Type:InternalIP,Address:10.1.0.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e38f17c71746485985c8ebe9f1d87480,SystemUUID:31667858-013a-6c49-bd37-41a0bfb4cd7c,BootID:a61dc5b1-073f-4988-b019-c5aa35ecae86,KernelVersion:5.4.0-1098-azure,OSImage:Ubuntu 18.04.6 LTS,ContainerRuntimeVersion:containerd://1.6.2,KubeletVersion:v1.27.0-alpha.0.989+eabb70833a5649,KubeProxyVersion:v1.27.0-alpha.0.989+eabb70833a5649,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-apiserver:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:135903699,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-controller-manager:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:125717305,},ContainerImage{Names:[docker.io/calico/cni@sha256:914823d144204288f881e49b93b6852febfe669074cd4e2a782860981615f521 docker.io/calico/cni:v3.23.0],SizeBytes:110494683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/calico/node@sha256:4763820ecb4d8e82483a2ffabfec7fcded9603318692df210a778d223a4d7474 docker.io/calico/node:v3.23.0],SizeBytes:71573794,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649 
registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-scheduler:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:57551672,},ContainerImage{Names:[gcr.io/k8s-staging-ci-images/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.989_eabb70833a5649 registry.k8s.io/kube-proxy:v1.27.0-alpha.0.989_eabb70833a5649],SizeBytes:52478325,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75 registry.k8s.io/kube-apiserver:v1.26.0],SizeBytes:35317868,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec registry.k8s.io/kube-controller-manager:v1.26.0],SizeBytes:32244989,},ContainerImage{Names:[registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe registry.k8s.io/kube-proxy:v1.26.0],SizeBytes:21536465,},ContainerImage{Names:[registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410 registry.k8s.io/kube-scheduler:v1.26.0],SizeBytes:17484038,},ContainerImage{Names:[capzci.azurecr.io/azure-cloud-node-manager@sha256:45259845bc04cb115596dd16d88262d84214a1099fe085531240b24fa03021cf capzci.azurecr.io/azure-cloud-node-manager:63c1cd3],SizeBytes:16704716,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 14 18:31:35.141: INFO: Logging kubelet events for node capz-67tgp2-mp-0000001 Jan 14 18:31:35.246: INFO: Logging pods the kubelet thinks is on node capz-67tgp2-mp-0000001 Jan 14 18:31:35.439: INFO: calico-node-lzp55 started at 2023-01-14 18:19:07 +0000 UTC (2+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Init container upgrade-ipam ready: true, restart count 0 Jan 14 18:31:35.439: INFO: Init container install-cni ready: true, restart count 0 Jan 14 18:31:35.439: INFO: Container calico-node ready: true, restart count 0 Jan 14 18:31:35.439: INFO: privileged-pod started at 2023-01-14 18:31:28 +0000 UTC (0+2 container statuses recorded) Jan 14 18:31:35.439: INFO: Container not-privileged-container ready: true, restart count 0 Jan 14 18:31:35.439: INFO: Container privileged-container ready: true, restart count 0 Jan 14 18:31:35.439: INFO: terminate-cmd-rpne7ecf535-e473-4273-8075-40044ca55f4e started at 2023-01-14 18:31:30 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container terminate-cmd-rpn ready: false, restart count 0 Jan 14 18:31:35.439: INFO: adopt-release-5tk47 started at 2023-01-14 18:30:51 +0000 UTC (0+1 container statuses recorded) Jan 14 
18:31:35.439: INFO: Container c ready: true, restart count 0 Jan 14 18:31:35.439: INFO: adopt-release-ctmlr started at 2023-01-14 18:30:51 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container c ready: true, restart count 0 Jan 14 18:31:35.439: INFO: pod-adoption started at 2023-01-14 18:29:01 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container pod-adoption ready: false, restart count 0 Jan 14 18:31:35.439: INFO: webserver-deployment-7f5969cbc7-f6jmv started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:35.439: INFO: webserver-deployment-7f5969cbc7-zmkvp started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:35.439: INFO: update-demo-nautilus-9757j started at 2023-01-14 18:30:58 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container update-demo ready: false, restart count 0 Jan 14 18:31:35.439: INFO: busybox-6838bd23-aab9-4abf-b816-7aa83c52b6f1 started at 2023-01-14 18:30:02 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container busybox ready: true, restart count 0 Jan 14 18:31:35.439: INFO: webserver-deployment-7f5969cbc7-jhph8 started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:35.439: INFO: test-webserver-bebd24c8-7e4e-468d-b7f9-f7dacc78fdd5 started at 2023-01-14 18:28:14 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container test-webserver ready: true, restart count 0 Jan 14 18:31:35.439: INFO: rs-q4mbj started at 2023-01-14 18:31:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container donothing ready: true, restart count 0 Jan 14 18:31:35.439: INFO: pod-configmaps-d93eabd0-999b-4501-ac99-2dcef2f85f8f started at 2023-01-14 18:29:48 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container agnhost-container ready: true, restart count 0 Jan 14 18:31:35.439: INFO: cloud-node-manager-c24hp started at 2023-01-14 18:20:38 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container cloud-node-manager ready: true, restart count 0 Jan 14 18:31:35.439: INFO: dns-test-45948e0a-047f-4523-969c-9bc41b0b2ef8 started at 2023-01-14 18:28:32 +0000 UTC (0+3 container statuses recorded) Jan 14 18:31:35.439: INFO: Container jessie-querier ready: true, restart count 0 Jan 14 18:31:35.439: INFO: Container querier ready: true, restart count 0 Jan 14 18:31:35.439: INFO: Container webserver ready: true, restart count 0 Jan 14 18:31:35.439: INFO: pod-test started at 2023-01-14 18:29:43 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container pod-test ready: true, restart count 0 Jan 14 18:31:35.439: INFO: rs-mkqq8 started at 2023-01-14 18:31:20 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container donothing ready: true, restart count 0 Jan 14 18:31:35.439: INFO: test-deployment-7df74c55ff-84hdq started at 2023-01-14 18:29:43 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container test-deployment ready: true, restart count 0 Jan 14 18:31:35.439: INFO: dns-test-6b8b5a61-e5df-42db-ac53-e949636abcb0 started at 2023-01-14 18:30:44 +0000 UTC (0+3 container statuses recorded) Jan 14 18:31:35.439: INFO: Container jessie-querier 
ready: false, restart count 0 Jan 14 18:31:35.439: INFO: Container querier ready: false, restart count 0 Jan 14 18:31:35.439: INFO: Container webserver ready: false, restart count 0 Jan 14 18:31:35.439: INFO: image-pull-test02589db6-0ccc-440c-9ec6-225d18b71d37 started at 2023-01-14 18:31:06 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container image-pull-test ready: false, restart count 0 Jan 14 18:31:35.439: INFO: affinity-clusterip-transition-4qwm9 started at 2023-01-14 18:31:04 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Jan 14 18:31:35.439: INFO: image-pull-test1a9e8d67-d219-4a39-b91b-061fb78c9cfc started at 2023-01-14 18:29:47 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container image-pull-test ready: false, restart count 0 Jan 14 18:31:35.439: INFO: busybox-2cf95817-dadf-4675-8b51-2fdbeab77a73 started at 2023-01-14 18:31:12 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container busybox ready: true, restart count 0 Jan 14 18:31:35.439: INFO: proxy-service-p2gl8-jl8p6 started at 2023-01-14 18:31:22 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container proxy-service-p2gl8 ready: true, restart count 0 Jan 14 18:31:35.439: INFO: webserver-deployment-7f5969cbc7-vn5tb started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:35.439: INFO: pod-projected-secrets-99ff3575-08d9-47b4-9937-ccb362eac3a5 started at 2023-01-14 18:30:43 +0000 UTC (0+3 container statuses recorded) Jan 14 18:31:35.439: INFO: Container creates-volume-test ready: true, restart count 0 Jan 14 18:31:35.439: INFO: Container dels-volume-test ready: true, restart count 0 Jan 14 18:31:35.439: INFO: Container upds-volume-test ready: true, restart count 0 Jan 14 18:31:35.439: INFO: liveness-c28e233c-23c4-451f-b61f-015c73828952 started at 2023-01-14 18:29:26 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container agnhost-container ready: true, restart count 4 Jan 14 18:31:35.439: INFO: kube-proxy-xd8xz started at 2023-01-14 18:19:07 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container kube-proxy ready: true, restart count 0 Jan 14 18:31:35.439: INFO: e2e-test-httpd-pod started at 2023-01-14 18:29:01 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container e2e-test-httpd-pod ready: false, restart count 0 Jan 14 18:31:35.439: INFO: sample-crd-conversion-webhook-deployment-74ff66dd47-h5vtw started at 2023-01-14 18:31:29 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container sample-crd-conversion-webhook ready: false, restart count 0 Jan 14 18:31:35.439: INFO: webserver-deployment-7f5969cbc7-fcnzv started at 2023-01-14 18:29:50 +0000 UTC (0+1 container statuses recorded) Jan 14 18:31:35.439: INFO: Container httpd ready: false, restart count 0 Jan 14 18:31:37.011: INFO: Latency metrics for node capz-67tgp2-mp-0000001 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/14/23 18:31:37.011 (5.726s) < Exit [DeferCleanup (Each)] [sig-node] Pods - dump namespaces | framework.go:206 @ 01/14/23 18:31:37.011 (5.726s) > Enter [DeferCleanup (Each)] [sig-node] Pods - tear down framework | framework.go:203 @ 01/14/23 18:31:37.011 STEP: Destroying namespace "pods-9842" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/14/23 18:31:37.011 < Exit [DeferCleanup (Each)] [sig-node] Pods - tear down framework | framework.go:203 @ 01/14/23 18:31:37.123 (111ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/14/23 18:31:37.123 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/14/23 18:31:37.123 (0s)
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should apply changes to a resourcequota status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support CronJob API operations [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should apply changes to a job status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should manage the lifecycle of a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should get and update a ReplicationController scale [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should update a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] SubjectReview should support SubjectReview API operations [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events API should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should manage the lifecycle of an event [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Hostname [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]
Kubernetes e2e suite [It] [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Ingress API should support creating Ingress API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [It] [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should complete a service status lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should delete a collection of services [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should find a service from listing all namespaces [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should test the lifecycle of an Endpoint [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should test the lifecycle of an Endpoint [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Ephemeral Containers [NodeConformance] will start an ephemeral container in an existing pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling an agnhost Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Lease lease API should be available [Conformance]
Kubernetes e2e suite [It] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS
Kubernetes e2e suite [It] [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should replace a pod template [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should delete a collection of pods [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should patch a pod status [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support pod readiness gates [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [It] [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should patch a secret [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls with slashes as separator [MinimumKubeletVersion:1.23]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] LimitRange should list, patch and delete a LimitRange by collection [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSIInlineVolumes should support CSIVolumeSource in Pod API [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSIInlineVolumes should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSIStorageCapacity should support CSIStorageCapacities API operations [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support subPath [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
Kubernetes e2e suite [ReportAfterSuite] Kubernetes e2e JUnit report
Kubernetes e2e suite [ReportAfterSuite] Kubernetes e2e suite report
Kubernetes e2e suite [ReportBeforeSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST NOT fail validation for create of a custom resource that satisfies the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains a x-kubernetes-validations rule that refers to a property that do not exist
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that contains a syntax error
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that exceeds the estimated cost limit
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource that exceeds the runtime cost limit for x-kubernetes-validations rule execution
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail update of a custom resource that does not satisfy a x-kubernetes-validations transition rule
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail validation for create of a custom resource that does not satisfy the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should accurately determine present and missing resources
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply a finalizer to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply an update to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for subresources
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [It] [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [It] [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [It] [sig-api-machinery] kube-apiserver identity [Feature:APIServerIdentity] kube-apiserver identity should persist after restart [Disruptive]
Kubernetes e2e suite [It] [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should be able to schedule after more than 100 missed schedule
Kubernetes e2e suite [It] [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [It] [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [It] [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [It] [sig-apps] CronJob should support timezone
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [It] [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [It] [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [It] [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [It] [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore DisruptionTarget condition
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore exit code 137
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy on exit code to fail the job early
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy to not count the failure towards the backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [It] [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [It] [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [It] [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet MinReadySeconds should be honored when enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenDeleted)
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenScaled) [Feature:StatefulSetAutoDeletePVC]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a OnScaledown policy
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a WhenDeleted policy
Kubernetes e2e suite [It] [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [It] [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [It] [sig-auth] SelfSubjectReview [Feature:APISelfSubjectReview] should support SelfSubjectReview API operations
Kubernetes e2e suite [It] [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) CustomResourceDefinition Should scale with a CRD targetRef
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light [Slow] Should scale from 2 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods on a busy application with an idle sidecar container
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale down
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale up
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with both scale up and down controls configured should keep recommendation within the range over two stabilization windows
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with both scale up and down controls configured should keep recommendation within the range with stabilization window and pod limit rate
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with long upscale stabilization window should scale up only after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale down no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale up no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale down no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale up no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down to 0
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down with Prometheus
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target average value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Container Resource and External Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Pod and Object Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Pod and External metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Resource and Object metrics)
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl events should show event when pod is created
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply an invalid/valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields in both the root and embedded object of a CR
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields of a typed object
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run [Slow] running a failing command with --leave-stdin-open
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run [Slow] running a failing command without --restart=Never
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run [Slow] running a failing command without --restart=Never, but with --rm
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run running a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run running a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes should handle in-cluster config
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes should support port-forward
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [It] [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to add nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to cadvisor port 4194 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to the readonly kubelet port 10255 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 10255 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 4194 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted
Kubernetes e2e suite [It] [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [It] [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [It] [sig-network] CVE-2021-29923 IPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal
Kubernetes e2e suite [It] [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
Kubernetes e2e suite [It] [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]
Kubernetes e2e suite [It] [sig-network] DNS HostNetwork should resolve DNS of partial qualified names for services on hostNetwork pods with dnsPolicy: ClusterFirstWithHostNet [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [It] [sig-network] DNS should work with the pod containing more than 6 DNS search paths and longer than 256 search list characters
Kubernetes e2e suite [It] [sig-network] EndpointSliceMirroring should mirror a custom Endpoint with multiple subsets and same IP address
Kubernetes e2e suite [It] [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [It] [sig-network] Firewall rule control plane should not expose well-known ports
Kubernetes e2e suite [It] [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should allow IngressClass to have Namespace-scoped parameters [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should choose the one with the later CreationTimestamp, if equal the one with the lower name when two ingressClasses are marked as default[Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should not set default value if no default IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should set default value on new IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work from pods
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create LoadBalancer Service without NodePort and change it [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to preserve UDP traffic when server pod cycles for a LoadBalancer service on different nodes
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to preserve UDP traffic when server pod cycles for a LoadBalancer service on the same nodes
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP off [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP off [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP on [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP on [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should not have connectivity disruption during rolling update with externalTrafficPolicy=Cluster [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should not have connectivity disruption during rolling update with externalTrafficPolicy=Cluster [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should not have connectivity disruption during rolling update with externalTrafficPolicy=Local [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should not have connectivity disruption during rolling update with externalTrafficPolicy=Local [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to create a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to create a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to switch between IG and NEG modes
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to switch between IG and NEG modes
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints to NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints to NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API with endport field
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API with endport field
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy with Status subresource [Feature:NetworkPolicyStatus]
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy with Status subresource [Feature:NetworkPolicyStatus]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from all pods in a namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from all pods in a namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from pods based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from pods based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress from pods on other namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress from pods on other namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce ingress policy allowing any port traffic to a server on a specific protocol [Feature:NetworkPolicy] [Feature:UDP]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce ingress policy allowing any port traffic to a server on a specific protocol [Feature:NetworkPolicy] [Feature:UDP]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Multiple PodSelectors and NamespaceSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Multiple PodSelectors and NamespaceSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on any PodSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on any PodSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic for a target [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic for a target [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic from pods in all namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic from pods in all namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic based on NamespaceSelector with MatchLabels using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic based on NamespaceSelector with MatchLabels using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not allow access by TCP when a policy specifies only UDP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not allow access by TCP when a policy specifies only UDP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not mistakenly treat 'protocol: SCTP' as 'protocol: TCP', even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not mistakenly treat 'protocol: SCTP' as 'protocol: TCP', even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should properly isolate pods that are selected by a policy allowing SCTP, even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should properly isolate pods that are selected by a policy allowing SCTP, even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Serial]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Serial]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [It] [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
Kubernetes e2e suite [It] [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
Kubernetes e2e suite [It] [sig-network] Networking should allow creating a Pod with an SCTP HostPort [LinuxOnly] [Serial]
Kubernetes e2e suite [It] [sig-network] Networking should allow creating a Pod with an SCTP HostPort [LinuxOnly] [Serial]
Kubernetes e2e suite [It] [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [It] [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [It] [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [It] [sig-network] Networking should provider Internet connection for containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [It] [sig-network] Networking should provider Internet connection for containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [It] [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [It] [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [It] [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource
Kubernetes e2e suite [It] [sig-network] Services GCE [Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] Services GCE [Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] Services should allow creating a basic SCTP service with pod and endpoints [LinuxOnly] [Serial]
Kubernetes e2e suite [It] [sig-network] Services should allow creating a basic SCTP service with pod and endpoints [LinuxOnly] [Serial]
Kubernetes e2e suite [It] [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [It] [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [It] [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
Kubernetes e2e suite [It] [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
Kubernetes e2e suite [It] [sig-network] Services should be able to up and down services
Kubernetes e2e suite [It] [sig-network] Services should be able to up and down services
Kubernetes e2e suite [It] [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [It] [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [It] [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
Kubernetes e2e suite [It] [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
Kubernetes e2e suite [It] [sig-network] Services should be rejected for evicted pods (no endpoints exist)
Kubernetes e2e suite [It] [sig-network] Services should be rejected for evicted pods (no endpoints exist)
Kubernetes e2e suite [It] [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [It] [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [It] [sig-network] Services should be updated after adding or deleting ports
Kubernetes e2e suite [It] [sig-network] Services should be updated after adding or deleting ports
Kubernetes e2e suite [It] [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [It] [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [It] [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [It] [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [It] [sig-network] Services should fail health check node port if there are only terminating endpoints [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fail health check node port if there are only terminating endpoints [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with externalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with externalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with externallTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with externallTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [It] [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
Kubernetes e2e suite [It] [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
Kubernetes e2e suite [It] [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [It] [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [It] [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [It] [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should serve endpoints on same port and different protocol for internal traffic on Type LoadBalancer
Kubernetes e2e suite [It] [sig-network] Services should serve endpoints on same port and different protocol for internal traffic on Type LoadBalancer
Kubernetes e2e suite [It] [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [It] [sig-network] Services should work after the service has been recreated
Kubernetes e2e suite [It] [sig-network] Services should work after the service has been recreated
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should be able to reach pod on ipv4 and ipv6 ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create a single stack service with cluster ip from primary service range
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4,v6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6,v4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [It] [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [It] [sig-network] [Feature:Topology Hints] should distribute endpoints evenly
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Downgrade kube-proxy from a DaemonSet to static pods should maintain a functioning cluster [Feature:KubeProxyDaemonSetDowngrade]
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Upgrade kube-proxy from static pods to a DaemonSet should maintain a functioning cluster [Feature:KubeProxyDaemonSetUpgrade]
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [It] [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports external claim referenced by multiple containers of multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports external claim referenced by multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports init containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports inline claim referenced by multiple containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports simple pod referencing external resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports simple pod referencing inline resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports external claim referenced by multiple containers of multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports external claim referenced by multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports init containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports inline claim referenced by multiple containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports simple pod referencing external resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports simple pod referencing inline resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] driver supports claim and class parameters
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must not run a pod if a claim is not reserved for it
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must retry NodePrepareResource
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must unprepare resources for force-deleted pod
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet registers plugin
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple drivers work
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes reallocation works
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with network-attached resources schedules onto different nodes
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with node-local resources with delayed allocation uses all resources
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with node-local resources with immediate allocation uses all resources
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] Mount propagation should propagate mounts within defined scopes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] pods evicted from tainted nodes have pod disruption condition
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease should have OwnerReferences set
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should report node status infrequently
Kubernetes e2e suite [It] [sig-node] NodeProblemDetector should run without error
Kubernetes e2e suite [It] [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
Kubernetes e2e suite [It] [sig-node] Pods Extended Delete Grace Period should be submitted and removed
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report success for a pending container
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done
Kubernetes e2e suite [It] [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should be ready immediately after startupProbe succeeds
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted startup probe fails
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling with taints [Serial]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints
Kubernetes e2e suite [It] [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must create the user namespace if set to false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must not create the user namespace if set to true [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should mount all volumes with proper permissions with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should set FSGroup to user inside the container with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling when using hostIPC [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling when using hostPID [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context when if the container's primary UID belongs to some groups in the image [LinuxOnly] should add pod.Spec.SecurityContext.SupplementalGroups to them [LinuxOnly] in resultant supplementary groups for the container processes
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Secret should create a pod that reads a secret
Kubernetes e2e suite [It] [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster downgrade should be able to run gpu pod after downgrade [Feature:GPUClusterDowngrade]
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster upgrade should be able to run gpu pod after upgrade [Feature:GPUClusterUpgrade]
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] master upgrade should NOT disrupt gpu pod [Feature:GPUMasterUpgrade]
Kubernetes e2e suite [It] [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
Kubernetes e2e suite [It] [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [It] [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [It] [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation
Kubernetes e2e suite [It] [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial]
Kubernetes e2e suite [It] [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates Pods with non-empty schedulingGates are blocked on scheduling [Feature:PodSchedulingReadiness] [alpha]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates pod disruption condition is added to the preempted pod
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
Kubernetes e2e suite [It] [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
Kubernetes e2e suite [It] [sig-storage] CSI Mock fsgroup as mount option Delegate FSGroup to CSI driver [LinuxOnly] should not pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [It] [sig-storage] CSI Mock fsgroup as mount option Delegate FSGroup to CSI driver [LinuxOnly] should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [It] [sig-storage] CSI Mock selinux on mount SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should add SELinux mount option to existing mount options
Kubernetes e2e suite [It] [sig-storage] CSI Mock selinux on mount SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should not pass SELinux mount option for CSI driver that does not support SELinux mount
Kubernetes e2e suite [It] [sig-storage] CSI Mock selinux on mount SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should not pass SELinux mount option for Pod without SELinux context
Kubernetes e2e suite [It] [sig-storage] CSI Mock selinux on mount SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should not pass SELinux mount option for RWO volume
Kubernetes e2e suite [It] [sig-storage] CSI Mock selinux on mount SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should pass SELinux mount option for RWOP volume and Pod with SELinux context set
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume attach CSI CSIDriver deployment after pod creation using non-attachable mock driver should bringup pod after deploying CSIDriver attach=false [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume attach CSI attach test using mock driver should not require VolumeAttach for drivers without attachment
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume attach CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume attach CSI attach test using mock driver should require VolumeAttach for drivers with attachment
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume attach CSI attach test using mock driver should require VolumeAttach for ephemermal volume and drivers with attachment
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion Expansion with recovery[Feature:RecoverVolumeExpansionFailure] recovery should not be possible in partially expanded volumes
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion Expansion with recovery[Feature:RecoverVolumeExpansionFailure] should allow recovery if controller expansion fails with final error
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion Expansion with recovery[Feature:RecoverVolumeExpansionFailure] should record target size in allocated resources
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume fsgroup policies CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume fsgroup policies CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume fsgroup policies CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume limit CSI volume limit information using mock driver should report attach limit for generic ephemeral volume when persistent volume is attached [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume limit CSI volume limit information using mock driver should report attach limit for persistent volume when generic ephemeral volume is attached [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume limit CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage ephemeral error
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage success
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeStage error cases [Slow] should not call NodeUnstage after NodeStage final error
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage ephemeral error
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage final error
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeUnstage error cases [Slow] should call NodeStage after NodeUnstage success
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume service account token CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume service account token CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume service account token CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume snapshot CSI Snapshot Controller metrics [Feature:VolumeSnapshotDataSource] snapshot controller should emit dynamic CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume snapshot CSI Snapshot Controller metrics [Feature:VolumeSnapshotDataSource] snapshot controller should emit pre-provisioned CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume snapshot CSI Volume Snapshots [Feature:VolumeSnapshotDataSource] volumesnapshotcontent and pvc in Bound state with deletion timestamp set should not get deleted while snapshot finalizer exists
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume snapshot CSI Volume Snapshots secrets [Feature:VolumeSnapshotDataSource] volume snapshot create/delete with secrets
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity CSIStorageCapacity CSIStorageCapacity disabled
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity CSIStorageCapacity CSIStorageCapacity unused
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity CSIStorageCapacity CSIStorageCapacity used, have capacity
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity CSIStorageCapacity CSIStorageCapacity used, insufficient capacity
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity CSIStorageCapacity CSIStorageCapacity used, no capacity
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity storage capacity exhausted, immediate binding
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity storage capacity exhausted, late binding, no topology
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity storage capacity exhausted, late binding, with topology
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity storage capacity unlimited
Kubernetes e2e suite [It] [sig-storage] CSI Mock workload info CSI workload information using mock driver contain ephemeral=true when using inline volume
Kubernetes e2e suite [It] [sig-storage] CSI Mock workload info CSI workload information using mock driver should be passed when podInfoOnMount=true
Kubernetes e2e suite [It] [sig-storage] CSI Mock workload info CSI workload information using mock driver should not be passed when CSIDriver does not exist
Kubernetes e2e suite [It] [sig-storage] CSI Mock workload info CSI workload information using mock driver should not be passed when podInfoOnMount=false
Kubernetes e2e suite [It] [sig-storage] CSI Mock workload info CSI workload information using mock driver should not be passed when podInfoOnMount=nil
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: p