Result: FAILURE
Tests: 1 failed / 2 succeeded
Started: 2023-01-22 23:02
Elapsed: 3h7m
Revision: release-1.7

Test Failures


capz-e2e [It] Conformance Tests conformance-tests (2h56m)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\s\[It\]\sConformance\sTests\sconformance\-tests$'
[FAILED] Unexpected error:
    <*errors.withStack | 0xc0009ac150>: {
        error: <*errors.withMessage | 0xc000956820>{
            cause: <*errors.errorString | 0xc0004f4b60>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x3143379, 0x353bac7, 0x18e62fb, 0x18f9df8, 0x147c741],
    }
    Unable to run conformance tests: error container run failed with exit code 1
occurred
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 01/23/23 01:56:33.91
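Note: the nested error dump above has the shape produced by github.com/pkg/errors wrapping (an *errors.withStack around an *errors.withMessage around the root cause). A minimal Go sketch under that assumption; the function name runConformance is hypothetical and not the actual capz test code:

  package main

  import (
      "errors"
      "fmt"

      pkgerrors "github.com/pkg/errors"
  )

  func runConformance() error {
      // stdlib errors.New yields the inner *errors.errorString seen in the dump
      cause := errors.New("error container run failed with exit code 1")
      // pkgerrors.Wrap adds the *errors.withMessage and *errors.withStack layers
      return pkgerrors.Wrap(cause, "Unable to run conformance tests")
  }

  func main() {
      // %+v prints the message chain plus the recorded stack frames
      fmt.Printf("%+v\n", runConformance())
  }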

Full stdout/stderr: junit.e2e_suite.1.xml



2 Passed Tests

22 Skipped Tests

Error lines from build-log.txt

... skipping 483 lines ...
------------------------------
Conformance Tests conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100
  INFO: Cluster name is capz-conf-zs64h3
  STEP: Creating namespace "capz-conf-zs64h3" for hosting the cluster @ 01/22/23 23:12:14.386
  Jan 22 23:12:14.386: INFO: starting to create namespace for hosting the "capz-conf-zs64h3" test spec
2023/01/22 23:12:14 failed trying to get namespace (capz-conf-zs64h3):namespaces "capz-conf-zs64h3" not found
  INFO: Creating namespace capz-conf-zs64h3
  INFO: Creating event watcher for namespace "capz-conf-zs64h3"
  conformance-tests - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:102 @ 01/22/23 23:12:14.453
    conformance-tests
    Name | N | Min | Median | Mean | StdDev | Max
  INFO: Creating the workload cluster with name "capz-conf-zs64h3" using the "conformance-ci-artifacts-windows-containerd" template (Kubernetes v1.24.11-rc.0.6+7c685ed7305e76, 1 control-plane machines, 0 worker machines)
... skipping 112 lines ...
  STEP: Waiting for the workload nodes to exist @ 01/22/23 23:19:03.205
  STEP: Checking all the machines controlled by capz-conf-zs64h3-md-win are in the "<None>" failure domain @ 01/22/23 23:21:53.503
  INFO: Waiting for the machine pools to be provisioned
  INFO: Using repo-list '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/data/kubetest/repo-list.yaml' for version 'v1.24.11-rc.0.6+7c685ed7305e76'
  STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e, command=["-nodes=1" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--report-prefix=kubetest." "--num-nodes=2" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "-ginkgo.progress=true" "-ginkgo.skip=\\[LinuxOnly\\]|\\[Excluded:WindowsDocker\\]|device.plugin.for.Windows" "-ginkgo.slowSpecThreshold=120" "-node-os-distro=windows" "-dump-logs-on-failure=true" "-ginkgo.focus=(\\[sig-windows\\]|\\[sig-scheduling\\].SchedulerPreemption|\\[sig-autoscaling\\].\\[Feature:HPA\\]|\\[sig-apps\\].CronJob).*(\\[Serial\\]|\\[Slow\\])|(\\[Serial\\]|\\[Slow\\]).*(\\[Conformance\\]|\\[NodeConformance\\])|\\[sig-api-machinery\\].Garbage.collector" "-ginkgo.trace=true" "-ginkgo.v=true" "-prepull-images=true" "-disable-log-dump=true" "-ginkgo.flakeAttempts=0"] @ 01/22/23 23:21:53.811
  I0122 23:22:01.018730      14 e2e.go:129] Starting e2e run "e9b272d5-52c6-4cae-a53c-abd7836f7454" on Ginkgo node 1
  {"msg":"Test Suite starting","total":61,"completed":0,"skipped":0,"failed":0}

  Running Suite: Kubernetes e2e suite
  ===================================
  Random Seed: 1674429720 - Will randomize all specs
  Will run 61 of 6973 specs
  
  Jan 22 23:22:03.653: INFO: >>> kubeConfig: /tmp/kubeconfig
... skipping 72 lines ...
  [AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
    test/e2e/framework/framework.go:188
  Jan 22 23:24:18.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "sched-preemption-1329" for this suite.
  [AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
    test/e2e/scheduling/preemption.go:80
  •{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":61,"completed":1,"skipped":59,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-api-machinery] Garbage collector 
    should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 35 lines ...
  For evicted_pods_total:
  
  [AfterEach] [sig-api-machinery] Garbage collector
    test/e2e/framework/framework.go:188
  Jan 22 23:24:20.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "gc-5889" for this suite.
  •{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":61,"completed":2,"skipped":270,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light 
    Should scale from 2 pods to 1 pod [Slow]
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:82
  [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 123 lines ...
  test/e2e/autoscaling/framework.go:23
    ReplicationController light
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:69
      Should scale from 2 pods to 1 pod [Slow]
      test/e2e/autoscaling/horizontal_pod_autoscaling.go:82
  ------------------------------
  {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 2 pods to 1 pod [Slow]","total":61,"completed":3,"skipped":305,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-api-machinery] Garbage collector 
    should delete RS created by deployment when not orphaning [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 35 lines ...
  For evicted_pods_total:
  
  [AfterEach] [sig-api-machinery] Garbage collector
    test/e2e/framework/framework.go:188
  Jan 22 23:30:24.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "gc-279" for this suite.
  •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":61,"completed":4,"skipped":332,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-apps] Daemon set [Serial] 
    should run and stop complex daemon [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 63 lines ...
  Jan 22 23:30:46.174: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"4175"},"items":null}
  
  [AfterEach] [sig-apps] Daemon set [Serial]
    test/e2e/framework/framework.go:188
  Jan 22 23:30:46.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "daemonsets-3375" for this suite.
  •{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":61,"completed":5,"skipped":379,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-api-machinery] Garbage collector 
    should support cascading deletion of custom resources
    test/e2e/apimachinery/garbage_collector.go:905
  [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 10 lines ...
  Jan 22 23:30:48.913: INFO: created dependent resource "dependenttnhdb"
  Jan 22 23:30:48.987: INFO: created canary resource "canaryhr48x"
  [AfterEach] [sig-api-machinery] Garbage collector
    test/e2e/framework/framework.go:188
  Jan 22 23:31:04.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "gc-203" for this suite.
  •{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":61,"completed":6,"skipped":467,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet 
    Should scale from 5 pods to 3 pods and from 3 to 1
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:53
  [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 209 lines ...
  test/e2e/autoscaling/framework.go:23
    [Serial] [Slow] ReplicaSet
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:48
      Should scale from 5 pods to 3 pods and from 3 to 1
      test/e2e/autoscaling/horizontal_pod_autoscaling.go:53
  ------------------------------
  {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1","total":61,"completed":7,"skipped":507,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-api-machinery] Namespaces [Serial] 
    should ensure that all pods are removed when a namespace is deleted [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 17 lines ...
    test/e2e/framework/framework.go:188
  Jan 22 23:42:39.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "namespaces-7376" for this suite.
  STEP: Destroying namespace "nsdeletetest-8918" for this suite.
  Jan 22 23:42:39.379: INFO: Namespace nsdeletetest-8918 was already deleted
  STEP: Destroying namespace "nsdeletetest-6879" for this suite.
  •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":61,"completed":8,"skipped":737,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support 
    works end to end
    test/e2e/windows/gmsa_full.go:97
  [BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
... skipping 51 lines ...
    test/e2e/framework/framework.go:188
  Jan 22 23:42:46.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "namespaces-6038" for this suite.
  STEP: Destroying namespace "nsdeletetest-9123" for this suite.
  Jan 22 23:42:46.545: INFO: Namespace nsdeletetest-9123 was already deleted
  STEP: Destroying namespace "nsdeletetest-9390" for this suite.
  •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":61,"completed":9,"skipped":912,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods 
    should return within 10 seconds
    test/e2e/windows/kubelet_stats.go:47
  [BeforeEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial]
... skipping 201 lines ...
  STEP: Getting kubelet stats 5 times and checking average duration
  Jan 22 23:43:53.815: INFO: Getting kubelet stats for node capz-conf-2xrmj took an average of 332 milliseconds over 5 iterations
  [AfterEach] [sig-windows] [Feature:Windows] Kubelet-Stats [Serial]
    test/e2e/framework/framework.go:188
  Jan 22 23:43:53.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "kubelet-stats-test-windows-serial-3795" for this suite.
  •{"msg":"PASSED [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds","total":61,"completed":10,"skipped":1139,"failed":0}

  SSSSSSSSSS
  ------------------------------
  [sig-storage] EmptyDir wrapper volumes 
    should not cause race condition when used for configmaps [Serial] [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 29 lines ...
  Jan 22 23:45:06.998: INFO: Terminating ReplicationController wrapped-volume-race-780045ff-9e40-4ff6-a60e-d860b887c7b9 pods took: 101.194022ms
  STEP: Cleaning up the configMaps
  [AfterEach] [sig-storage] EmptyDir wrapper volumes
    test/e2e/framework/framework.go:188
  Jan 22 23:45:12.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "emptydir-wrapper-8896" for this suite.
  •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":61,"completed":11,"skipped":1149,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window 
    should scale down soon after the stabilization period
    test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:34
  [BeforeEach] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior)
... skipping 98 lines ...
  test/e2e/autoscaling/framework.go:23
    with short downscale stabilization window
    test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:33
      should scale down soon after the stabilization period
      test/e2e/autoscaling/horizontal_pod_autoscaling_behavior.go:34
  ------------------------------
  {"msg":"PASSED [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period","total":61,"completed":12,"skipped":1224,"failed":0}

  SSSSSSSSSSSSSSSS
  ------------------------------
  [sig-apps] Daemon set [Serial] 
    should retry creating failed daemon pods [Conformance]

    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-apps] Daemon set [Serial]
    test/e2e/framework/framework.go:187
  STEP: Creating a kubernetes client
  Jan 22 23:48:53.107: INFO: >>> kubeConfig: /tmp/kubeconfig
  STEP: Building a namespace api object, basename daemonsets
  STEP: Waiting for a default service account to be provisioned in namespace
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
  [BeforeEach] [sig-apps] Daemon set [Serial]
    test/e2e/apps/daemon_set.go:145
  [It] should retry creating failed daemon pods [Conformance]

    test/e2e/framework/framework.go:652
  STEP: Creating a simple DaemonSet "daemon-set"
  STEP: Check that daemon pods launch on every node of the cluster.
  Jan 22 23:48:53.561: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
  Jan 22 23:48:53.594: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
  Jan 22 23:48:53.594: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1
... skipping 9 lines ...
  Jan 22 23:48:57.633: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
  Jan 22 23:48:57.669: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
  Jan 22 23:48:57.669: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1
  Jan 22 23:48:58.630: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
  Jan 22 23:48:58.664: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
  Jan 22 23:48:58.665: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
  STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.

  Jan 22 23:48:58.813: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
  Jan 22 23:48:58.846: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
  Jan 22 23:48:58.846: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1
  Jan 22 23:48:59.884: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
  Jan 22 23:48:59.918: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
  Jan 22 23:48:59.918: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1
... skipping 6 lines ...
  Jan 22 23:49:02.885: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
  Jan 22 23:49:02.918: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
  Jan 22 23:49:02.918: INFO: Node capz-conf-2xrmj is running 0 daemon pod, expected 1
  Jan 22 23:49:03.884: INFO: DaemonSet pods can't tolerate node capz-conf-zs64h3-control-plane-dlccj with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
  Jan 22 23:49:03.918: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
  Jan 22 23:49:03.918: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
  STEP: Wait for the failed daemon pod to be completely deleted.

  [AfterEach] [sig-apps] Daemon set [Serial]
    test/e2e/apps/daemon_set.go:110
  STEP: Deleting DaemonSet "daemon-set"
  STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1609, will wait for the garbage collector to delete the pods
  Jan 22 23:49:04.103: INFO: Deleting DaemonSet.extensions daemon-set took: 36.334354ms
  Jan 22 23:49:04.204: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.652131ms
... skipping 4 lines ...
  Jan 22 23:49:09.303: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"9131"},"items":null}
  
  [AfterEach] [sig-apps] Daemon set [Serial]
    test/e2e/framework/framework.go:188
  Jan 22 23:49:09.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "daemonsets-1609" for this suite.
  •{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":61,"completed":13,"skipped":1240,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods 
    latency/resource should be within limit when create 10 pods with 0s interval
    test/e2e/windows/density.go:68
  [BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow]
... skipping 50 lines ...
  Jan 22 23:49:49.868: INFO: Pod test-ec417beb-eecb-40d1-b48d-35ba278bd923 no longer exists
  Jan 22 23:49:49.870: INFO: Pod test-3e55e96b-bdc1-44dd-9954-fb5543c55788 no longer exists
  [AfterEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow]
    test/e2e/framework/framework.go:188
  Jan 22 23:49:49.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "density-test-windows-4549" for this suite.
  •{"msg":"PASSED [sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval","total":61,"completed":14,"skipped":1314,"failed":0}

  SSSSSSSSSSSSSS
  ------------------------------
  [sig-apps] Daemon set [Serial] 
    should rollback without unnecessary restarts [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 51 lines ...
  Jan 22 23:50:15.666: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"9712"},"items":null}
  
  [AfterEach] [sig-apps] Daemon set [Serial]
    test/e2e/framework/framework.go:188
  Jan 22 23:50:15.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "daemonsets-5489" for this suite.
  •{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":61,"completed":15,"skipped":1328,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-node] Variable Expansion 
    should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-node] Variable Expansion
... skipping 2 lines ...
  Jan 22 23:50:15.845: INFO: >>> kubeConfig: /tmp/kubeconfig
  STEP: Building a namespace api object, basename var-expansion
  STEP: Waiting for a default service account to be provisioned in namespace
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
  [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
    test/e2e/framework/framework.go:652
  STEP: creating the pod with failed condition

  STEP: updating the pod
  Jan 22 23:52:16.806: INFO: Successfully updated pod "var-expansion-4dba3e13-e06f-4d89-980f-552c75326e8b"
  STEP: waiting for pod running
  STEP: deleting the pod gracefully
  Jan 22 23:52:28.874: INFO: Deleting pod "var-expansion-4dba3e13-e06f-4d89-980f-552c75326e8b" in namespace "var-expansion-7880"
  Jan 22 23:52:28.916: INFO: Wait up to 5m0s for pod "var-expansion-4dba3e13-e06f-4d89-980f-552c75326e8b" to be fully deleted
... skipping 5 lines ...
  • [SLOW TEST:139.215 seconds]
  [sig-node] Variable Expansion
  test/e2e/common/node/framework.go:23
    should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
    test/e2e/framework/framework.go:652
  ------------------------------
  {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":61,"completed":16,"skipped":1618,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-apps] Daemon set [Serial] 
    should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 86 lines ...
  Jan 22 23:53:02.292: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"10383"},"items":null}
  
  [AfterEach] [sig-apps] Daemon set [Serial]
    test/e2e/framework/framework.go:188
  Jan 22 23:53:02.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "daemonsets-1413" for this suite.
  •{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":61,"completed":17,"skipped":1898,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-node] Variable Expansion 
    should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]

    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-node] Variable Expansion
    test/e2e/framework/framework.go:187
  STEP: Creating a kubernetes client
  Jan 22 23:53:02.467: INFO: >>> kubeConfig: /tmp/kubeconfig
  STEP: Building a namespace api object, basename var-expansion
  STEP: Waiting for a default service account to be provisioned in namespace
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
  [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]

    test/e2e/framework/framework.go:652
  Jan 22 23:53:06.805: INFO: Deleting pod "var-expansion-13059b14-b127-49ee-a5f5-8d890851903b" in namespace "var-expansion-355"
  Jan 22 23:53:06.842: INFO: Wait up to 5m0s for pod "var-expansion-13059b14-b127-49ee-a5f5-8d890851903b" to be fully deleted
  [AfterEach] [sig-node] Variable Expansion
    test/e2e/framework/framework.go:188
  Jan 22 23:53:10.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "var-expansion-355" for this suite.
  •{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":61,"completed":18,"skipped":2023,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment 
    Should scale from 1 pod to 3 pods and from 3 to 5
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:40
  [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 73 lines ...
  test/e2e/autoscaling/framework.go:23
    [Serial] [Slow] Deployment
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:38
      Should scale from 1 pod to 3 pods and from 3 to 5
      test/e2e/autoscaling/horizontal_pod_autoscaling.go:40
  ------------------------------
  {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5","total":61,"completed":19,"skipped":2117,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-api-machinery] Garbage collector 
    should orphan pods created by rc if delete options say so [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 135 lines ...
  Jan 22 23:56:17.051: INFO: Deleting pod "simpletest.rc-xh4hj" in namespace "gc-7044"
  Jan 22 23:56:17.100: INFO: Deleting pod "simpletest.rc-z9dw6" in namespace "gc-7044"
  [AfterEach] [sig-api-machinery] Garbage collector
    test/e2e/framework/framework.go:188
  Jan 22 23:56:17.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "gc-7044" for this suite.
  •{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":61,"completed":20,"skipped":2218,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-apps] Daemon set [Serial] 
    should run and stop simple daemon [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 293 lines ...
  Jan 22 23:57:51.583: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"13403"},"items":null}
  
  [AfterEach] [sig-apps] Daemon set [Serial]
    test/e2e/framework/framework.go:188
  Jan 22 23:57:51.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "daemonsets-1851" for this suite.
  •{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":61,"completed":21,"skipped":2301,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-node] Variable Expansion 
    should succeed in writing subpaths in container [Slow] [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-node] Variable Expansion
... skipping 24 lines ...
  Jan 22 23:58:07.403: INFO: Deleting pod "var-expansion-8271b2a2-8b5d-42aa-8b9b-70b3e3ed416b" in namespace "var-expansion-4649"
  Jan 22 23:58:07.443: INFO: Wait up to 5m0s for pod "var-expansion-8271b2a2-8b5d-42aa-8b9b-70b3e3ed416b" to be fully deleted
  [AfterEach] [sig-node] Variable Expansion
    test/e2e/framework/framework.go:188
  Jan 22 23:58:13.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "var-expansion-4649" for this suite.
  •{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":61,"completed":22,"skipped":2405,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] kubelet GMSA support when creating a pod with correct GMSA credential specs 
    passes the credential specs down to the Pod's containers
    test/e2e/windows/gmsa_kubelet.go:45
  [BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
... skipping 21 lines ...
  Jan 22 23:58:23.410: INFO: stderr: ""
  Jan 22 23:58:23.410: INFO: stdout: "contoso.org. (1)\r\nThe command completed successfully\r\n"
  [AfterEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
    test/e2e/framework/framework.go:188
  Jan 22 23:58:23.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "gmsa-kubelet-test-windows-3956" for this suite.
  •{"msg":"PASSED [sig-windows] [Feature:Windows] GMSA Kubelet [Slow] kubelet GMSA support when creating a pod with correct GMSA credential specs passes the credential specs down to the Pod's containers","total":61,"completed":23,"skipped":2632,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-apps] Daemon set [Serial] 
    should list and delete a collection of DaemonSets [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 37 lines ...
  Jan 22 23:58:29.271: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"13635"},"items":[{"metadata":{"name":"daemon-set-csl9w","generateName":"daemon-set-","namespace":"daemonsets-9412","uid":"5f65b1be-96d0-4f14-a052-c221bc1a42f0","resourceVersion":"13635","creationTimestamp":"2023-01-22T23:58:23Z","deletionTimestamp":"2023-01-22T23:58:59Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6df8db488c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"f808545888da8fa34ef50c7342af3a9a31bce07aa7f2b4d958a2faca6b326473","cni.projectcalico.org/podIP":"192.168.14.42/32","cni.projectcalico.org/podIPs":"192.168.14.42/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"70ca2e84-ac83-40e0-93a5-87fafa3b28f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70ca2e84-ac83-40e0-93a5-87fafa3b28f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico.exe","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.14.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-bp6l8","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-bp6l8","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","termi
nationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-2xrmj","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-2xrmj"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:23Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:28Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:28Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:23Z"}],"hostIP":"10.1.0.5","podIP":"192.168.14.42","podIPs":[{"ip":"192.168.14.42"}],"startTime":"2023-01-22T23:58:23Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-22T23:58:27Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://e2ca4f87e85b1f8de4ebabe8b53846b496bab83db74eee1e9b34bdb3d9ca60d4","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-lpchl","generateName":"daemon-set-","namespace":"daemonsets-9412","uid":"0c8eff20-b12b-47e5-8a27-adc41c9c9751","resourceVersion":"13634","creationTimestamp":"2023-01-22T23:58:23Z","deletionTimestamp":"2023-01-22T23:58:59Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6df8db488c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"d349259a8543860d0148506b5972fe6c52c93c43645616144e5c179f45d6e5c4","cni.projectcalico.org/podIP":"192.168.198.34/32","cni.projectcalico.org/podIPs":"192.168.198.34/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"70ca2e84-ac83-40e0-93a5-87fafa3b28f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70ca2e84-ac83-40e0-93a5-87fafa3b28f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"prot
ocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico.exe","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2023-01-22T23:58:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.198.34\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-xm2h5","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-xm2h5","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-96jhk","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-96jhk"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:23Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:27Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T23:58:27Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-22T
23:58:23Z"}],"hostIP":"10.1.0.4","podIP":"192.168.198.34","podIPs":[{"ip":"192.168.198.34"}],"startTime":"2023-01-22T23:58:23Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-22T23:58:27Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://11f00f8d177ab9ad982484e50b8cd6d456d7f35aeddbb98006233e4be238a22b","started":true}],"qosClass":"BestEffort"}}]}
  
  [AfterEach] [sig-apps] Daemon set [Serial]
    test/e2e/framework/framework.go:188
  Jan 22 23:58:29.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "daemonsets-9412" for this suite.
  •{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":61,"completed":24,"skipped":2714,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) 
    Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:98
  [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 81 lines ...
  test/e2e/autoscaling/framework.go:23
    [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case)
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:96
      Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container
      test/e2e/autoscaling/horizontal_pod_autoscaling.go:98
  ------------------------------
  {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container","total":61,"completed":25,"skipped":2871,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-api-machinery] Namespaces [Serial] 
    should patch a Namespace [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 10 lines ...
  STEP: get the Namespace and ensuring it has the label
  [AfterEach] [sig-api-machinery] Namespaces [Serial]
    test/e2e/framework/framework.go:188
  Jan 23 00:01:10.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "namespaces-6010" for this suite.
  STEP: Destroying namespace "nspatchtest-2837c980-446e-4fce-9b28-09f45d9af33c-8325" for this suite.
  •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":61,"completed":26,"skipped":2943,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-scheduling] SchedulerPredicates [Serial] 
    validates resource limits of pods that are allowed to run  [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 80 lines ...
  [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
    test/e2e/framework/framework.go:188
  Jan 23 00:01:19.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "sched-pred-1217" for this suite.
  [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
    test/e2e/scheduling/predicates.go:83
  •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":61,"completed":27,"skipped":3098,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-apps] CronJob 
    should not schedule jobs when suspended [Slow] [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-apps] CronJob
... skipping 17 lines ...
  • [SLOW TEST:300.479 seconds]
  [sig-apps] CronJob
  test/e2e/apps/framework.go:23
    should not schedule jobs when suspended [Slow] [Conformance]
    test/e2e/framework/framework.go:652
  ------------------------------
  {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":61,"completed":28,"skipped":3127,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-apps] StatefulSet
... skipping 104 lines ...
  Jan 23 00:07:56.163: INFO: Waiting for statefulset status.replicas updated to 0
  Jan 23 00:07:56.196: INFO: Deleting statefulset ss
  [AfterEach] [sig-apps] StatefulSet
    test/e2e/framework/framework.go:188
  Jan 23 00:07:56.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "statefulset-8983" for this suite.
  •{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":61,"completed":29,"skipped":3577,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow] GMSA support 
    can read and write file to remote SMB folder
    test/e2e/windows/gmsa_full.go:167
  [BeforeEach] [sig-windows] [Feature:Windows] GMSA Full [Serial] [Slow]
... skipping 87 lines ...
  For evicted_pods_total:
  
  [AfterEach] [sig-api-machinery] Garbage collector
    test/e2e/framework/framework.go:188
  Jan 23 00:08:09.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "gc-9386" for this suite.
  •{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":61,"completed":30,"skipped":3666,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-node] Pods 
    should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
    test/e2e/common/node/pods.go:723
  [BeforeEach] [sig-node] Pods
... skipping 28 lines ...
  • [SLOW TEST:1631.415 seconds]
  [sig-node] Pods
  test/e2e/common/node/framework.go:23
    should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
    test/e2e/common/node/pods.go:723
  ------------------------------
  {"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":61,"completed":31,"skipped":3717,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController 
    Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:61
  [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 326 lines ...
  test/e2e/autoscaling/framework.go:23
    [Serial] [Slow] ReplicationController
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:59
      Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
      test/e2e/autoscaling/horizontal_pod_autoscaling.go:61
  ------------------------------
  {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability","total":61,"completed":32,"skipped":3759,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController 
    Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:64
  [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 454 lines ...
  test/e2e/autoscaling/framework.go:23
    [Serial] [Slow] ReplicationController
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:59
      Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
      test/e2e/autoscaling/horizontal_pod_autoscaling.go:64
  ------------------------------
  {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability","total":61,"completed":33,"skipped":4094,"failed":0}

  SSSS
  ------------------------------
  [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-apps] StatefulSet
... skipping 121 lines ...
  Jan 23 01:10:57.933: INFO: Waiting for statefulset status.replicas updated to 0
  Jan 23 01:10:57.965: INFO: Deleting statefulset ss
  [AfterEach] [sig-apps] StatefulSet
    test/e2e/framework/framework.go:188
  Jan 23 01:10:58.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "statefulset-8264" for this suite.
  •{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":61,"completed":34,"skipped":4098,"failed":0}

  SSSSSSS
  ------------------------------
  [sig-api-machinery] Garbage collector 
    should not be blocked by dependency circle [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 9 lines ...
  Jan 23 01:10:58.569: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"acf187d3-d32e-4c0b-92fe-704733e1c609", Controller:(*bool)(0xc002703fd6), BlockOwnerDeletion:(*bool)(0xc002703fd7)}}
  Jan 23 01:10:58.607: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2456750a-f30f-4480-ae2c-4452b083b784", Controller:(*bool)(0xc0005b37f6), BlockOwnerDeletion:(*bool)(0xc0005b37f7)}}
  [AfterEach] [sig-api-machinery] Garbage collector
    test/e2e/framework/framework.go:188
  Jan 23 01:11:03.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "gc-1342" for this suite.
  •{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":61,"completed":35,"skipped":4105,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints 
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 29 lines ...
  [AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
    test/e2e/framework/framework.go:188
  Jan 23 01:12:05.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "sched-preemption-6551" for this suite.
  [AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
    test/e2e/scheduling/preemption.go:80
  •{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":61,"completed":36,"skipped":4238,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-apps] CronJob 
    should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-apps] CronJob
... skipping 19 lines ...
  • [SLOW TEST:356.559 seconds]
  [sig-apps] CronJob
  test/e2e/apps/framework.go:23
    should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
    test/e2e/framework/framework.go:652
  ------------------------------
  {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":61,"completed":37,"skipped":4266,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet 
    Should scale from 1 pod to 3 pods and from 3 to 5
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:50
  [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 53 lines ...
  Jan 23 01:19:40.479: INFO: Deleting ReplicationController rs-ctrl took: 35.547538ms
  Jan 23 01:19:40.579: INFO: Terminating ReplicationController rs-ctrl pods took: 100.516538ms
  [AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
    test/e2e/framework/framework.go:188
  Jan 23 01:19:42.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "horizontal-pod-autoscaling-5586" for this suite.
  •{"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5","total":61,"completed":38,"skipped":4484,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-node] Variable Expansion 
    should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-node] Variable Expansion
    test/e2e/framework/framework.go:187
  STEP: Creating a kubernetes client
  Jan 23 01:19:42.415: INFO: >>> kubeConfig: /tmp/kubeconfig
  STEP: Building a namespace api object, basename var-expansion
  STEP: Waiting for a default service account to be provisioned in namespace
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
  [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
    test/e2e/framework/framework.go:652
  Jan 23 01:19:46.748: INFO: Deleting pod "var-expansion-07df5213-61e4-4a88-b139-eb0b322b3a1c" in namespace "var-expansion-7276"
  Jan 23 01:19:46.786: INFO: Wait up to 5m0s for pod "var-expansion-07df5213-61e4-4a88-b139-eb0b322b3a1c" to be fully deleted
  [AfterEach] [sig-node] Variable Expansion
    test/e2e/framework/framework.go:188
  Jan 23 01:19:48.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "var-expansion-7276" for this suite.
  •{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":61,"completed":39,"skipped":4564,"failed":0}

  SSSSSSSSSS
  ------------------------------
  [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
    evicts pods with minTolerationSeconds [Disruptive] [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 20 lines ...
  Jan 23 01:21:22.906: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
  STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
  [AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
    test/e2e/framework/framework.go:188
  Jan 23 01:21:23.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "taint-multiple-pods-6775" for this suite.
  •{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":61,"completed":40,"skipped":4574,"failed":0}

  S
  ------------------------------
  [sig-node] Pods 
    should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
    test/e2e/common/node/pods.go:682
  [BeforeEach] [sig-node] Pods
... skipping 29 lines ...
  • [SLOW TEST:406.113 seconds]
  [sig-node] Pods
  test/e2e/common/node/framework.go:23
    should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
    test/e2e/common/node/pods.go:682
  ------------------------------
  {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":61,"completed":41,"skipped":4575,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits 
    should not be exceeded after waiting 2 minutes
    test/e2e/windows/cpu_limits.go:43
  [BeforeEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial]
... skipping 34 lines ...
  test/e2e/windows/framework.go:27
    Container limits
    test/e2e/windows/cpu_limits.go:42
      should not be exceeded after waiting 2 minutes
      test/e2e/windows/cpu_limits.go:43
  ------------------------------
  {"msg":"PASSED [sig-windows] [Feature:Windows] Cpu Resources [Serial] Container limits should not be exceeded after waiting 2 minutes","total":61,"completed":42,"skipped":4840,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
    runs ReplicaSets to verify preemption running path [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 34 lines ...
  [AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
    test/e2e/framework/framework.go:188
  Jan 23 01:31:53.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "sched-preemption-2003" for this suite.
  [AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
    test/e2e/scheduling/preemption.go:80
  •{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":61,"completed":43,"skipped":4908,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-api-machinery] Garbage collector 
    should delete jobs and pods created by cronjob
    test/e2e/apimachinery/garbage_collector.go:1145
  [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 35 lines ...
  For evicted_pods_total:
  
  [AfterEach] [sig-api-machinery] Garbage collector
    test/e2e/framework/framework.go:188
  Jan 23 01:32:00.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "gc-2028" for this suite.
  •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":61,"completed":44,"skipped":4947,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] Allocatable node memory 
    should be equal to a calculated allocatable memory value
    test/e2e/windows/memory_limits.go:54
  [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
... skipping 14 lines ...
  Jan 23 01:32:01.366: INFO: nodeMem says: {capacity:{i:{value:17179398144 scale:0} d:{Dec:<nil>} s:16776756Ki Format:BinarySI} allocatable:{i:{value:17074540544 scale:0} d:{Dec:<nil>} s:16674356Ki Format:BinarySI} systemReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} kubeReserve:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} softEviction:{i:{value:0 scale:0} d:{Dec:<nil>} s: Format:BinarySI} hardEviction:{i:{value:104857600 scale:0} d:{Dec:<nil>} s:100Mi Format:BinarySI}}
  STEP: Checking stated allocatable memory 16674356Ki against calculated allocatable memory {{17074540544 0} {<nil>}  BinarySI}
  [AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
    test/e2e/framework/framework.go:188
  Jan 23 01:32:01.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "memory-limit-test-windows-9668" for this suite.
  •{"msg":"PASSED [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] Allocatable node memory should be equal to a calculated allocatable memory value","total":61,"completed":45,"skipped":5211,"failed":0}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-api-machinery] Garbage collector 
    should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 89 lines ...
  Jan 23 01:32:21.606: INFO: Deleting pod "simpletest-rc-to-be-deleted-gvhc6" in namespace "gc-8821"
  Jan 23 01:32:21.653: INFO: Deleting pod "simpletest-rc-to-be-deleted-gxr2s" in namespace "gc-8821"
  [AfterEach] [sig-api-machinery] Garbage collector
    test/e2e/framework/framework.go:188
  Jan 23 01:32:21.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "gc-8821" for this suite.
  •{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":61,"completed":46,"skipped":5359,"failed":0}

  SSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-scheduling] SchedulerPredicates [Serial] 
    validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 37 lines ...
  Jan 23 01:32:22.214: INFO: 	Container csi-proxy ready: true, restart count 0
  Jan 23 01:32:22.214: INFO: kube-proxy-windows-mrr95 from kube-system started at 2023-01-22 23:19:35 +0000 UTC (1 container statuses recorded)
  Jan 23 01:32:22.214: INFO: 	Container kube-proxy ready: true, restart count 0
  [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
    test/e2e/framework/framework.go:652
  STEP: Trying to launch a pod without a label to get a node which can launch it.
  Jan 23 01:33:22.357: FAIL: Unexpected error:
      <*errors.errorString | 0xc00021c1e0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred
  
... skipping 138 lines ...
  • Failure [65.776 seconds]
  [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/framework.go:40
    validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] [It]
    test/e2e/framework/framework.go:652
  
    Jan 23 01:33:22.357: Unexpected error:
        <*errors.errorString | 0xc00021c1e0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
  
... skipping 16 lines ...
    	test/e2e/e2e_test.go:136 +0x19
    testing.tRunner(0xc000503040, 0x741f9a8)
    	/usr/local/go/src/testing/testing.go:1446 +0x10b
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1493 +0x35f
  ------------------------------
  {"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":61,"completed":46,"skipped":5380,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-scheduling] SchedulerPredicates [Serial] 
    validates that NodeSelector is respected if matching  [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 51 lines ...
  [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
    test/e2e/framework/framework.go:188
  Jan 23 01:33:46.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "sched-pred-568" for this suite.
  [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
    test/e2e/scheduling/predicates.go:83
  •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":61,"completed":47,"skipped":5460,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

  SSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption 
    validates proper pods are preempted
    test/e2e/scheduling/preemption.go:355
  [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 34 lines ...
  [AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
    test/e2e/framework/framework.go:188
  Jan 23 01:35:34.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "sched-preemption-602" for this suite.
  [AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
    test/e2e/scheduling/preemption.go:80
  •{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":61,"completed":48,"skipped":5485,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-api-machinery] Garbage collector 
    should support orphan deletion of custom resources
    test/e2e/apimachinery/garbage_collector.go:1040
  [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 11 lines ...
  STEP: wait for the owner to be deleted
  STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the dependent crd
  [AfterEach] [sig-api-machinery] Garbage collector
    test/e2e/framework/framework.go:188
  Jan 23 01:36:37.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "gc-6911" for this suite.
  •{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":61,"completed":49,"skipped":5587,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-api-machinery] Garbage collector 
    should orphan pods created by rc if deleteOptions.OrphanDependents is nil
    test/e2e/apimachinery/garbage_collector.go:439
  [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...
  Jan 23 01:37:13.903: INFO: Deleting pod "simpletest.rc-fqltg" in namespace "gc-4646"
  Jan 23 01:37:13.943: INFO: Deleting pod "simpletest.rc-v49pq" in namespace "gc-4646"
  [AfterEach] [sig-api-machinery] Garbage collector
    test/e2e/framework/framework.go:188
  Jan 23 01:37:13.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "gc-4646" for this suite.
  •{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":61,"completed":50,"skipped":5625,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-node] NoExecuteTaintManager Single Pod [Serial] 
    removing taint cancels eviction [Disruptive] [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 28 lines ...
  • [SLOW TEST:135.859 seconds]
  [sig-node] NoExecuteTaintManager Single Pod [Serial]
  test/e2e/node/framework.go:23
    removing taint cancels eviction [Disruptive] [Conformance]
    test/e2e/framework/framework.go:652
  ------------------------------
  {"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":61,"completed":51,"skipped":5804,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-scheduling] SchedulerPredicates [Serial] 
    validates that NodeSelector is respected if not matching  [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 47 lines ...
  [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
    test/e2e/framework/framework.go:188
  Jan 23 01:39:31.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "sched-pred-3449" for this suite.
  [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
    test/e2e/scheduling/predicates.go:83
  •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":61,"completed":52,"skipped":5947,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-apps] Daemon set [Serial] 
    should verify changes to a daemon set status [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 61 lines ...
  Jan 23 01:39:42.686: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"36634"},"items":null}
  
  [AfterEach] [sig-apps] Daemon set [Serial]
    test/e2e/framework/framework.go:188
  Jan 23 01:39:42.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "daemonsets-8567" for this suite.
  •{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":61,"completed":53,"skipped":6016,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-scheduling] SchedulerPreemption [Serial] 
    validates basic preemption works [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 19 lines ...
  [AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
    test/e2e/framework/framework.go:188
  Jan 23 01:41:06.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "sched-preemption-3580" for this suite.
  [AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
    test/e2e/scheduling/preemption.go:80
  •{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":61,"completed":54,"skipped":6168,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-api-machinery] Garbage collector 
    should delete pods created by rc when not orphaning [Conformance]
    test/e2e/framework/framework.go:652
  [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...
  For evicted_pods_total:
  
  [AfterEach] [sig-api-machinery] Garbage collector
    test/e2e/framework/framework.go:188
  Jan 23 01:41:17.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "gc-854" for this suite.
  •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":61,"completed":55,"skipped":6207,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment 
    Should scale from 5 pods to 3 pods and from 3 to 1
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:43
  [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 208 lines ...
  test/e2e/autoscaling/framework.go:23
    [Serial] [Slow] Deployment
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:38
      Should scale from 5 pods to 3 pods and from 3 to 1
      test/e2e/autoscaling/horizontal_pod_autoscaling.go:43
  ------------------------------
  {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1","total":61,"completed":56,"skipped":6286,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] attempt to deploy past allocatable memory limits 
    should fail deployments of pods once there isn't enough memory
    test/e2e/windows/memory_limits.go:60
  [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
    test/e2e/windows/framework.go:28
  [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
    test/e2e/framework/framework.go:187
  STEP: Creating a kubernetes client
  Jan 23 01:52:38.031: INFO: >>> kubeConfig: /tmp/kubeconfig
  STEP: Building a namespace api object, basename memory-limit-test-windows
  STEP: Waiting for a default service account to be provisioned in namespace
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
  [BeforeEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
    test/e2e/windows/memory_limits.go:48
  [It] should fail deployments of pods once there isn't enough memory
    test/e2e/windows/memory_limits.go:60
  Jan 23 01:52:38.440: INFO: Found FailedScheduling event with message 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
  [AfterEach] [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow]
    test/e2e/framework/framework.go:188
  Jan 23 01:52:38.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  STEP: Destroying namespace "memory-limit-test-windows-4139" for this suite.
  •{"msg":"PASSED [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] attempt to deploy past allocatable memory limits should fail deployments of pods once there isn't enough memory","total":61,"completed":57,"skipped":6682,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
  ------------------------------
  [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) 
    Should not scale up on a busy sidecar with an idle application
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:103
  [BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU)
... skipping 94 lines ...
  test/e2e/autoscaling/framework.go:23
    [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case)
    test/e2e/autoscaling/horizontal_pod_autoscaling.go:96
      Should not scale up on a busy sidecar with an idle application
      test/e2e/autoscaling/horizontal_pod_autoscaling.go:103
  ------------------------------
  {"msg":"PASSED [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application","total":61,"completed":58,"skipped":6826,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

  SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSJan 23 01:56:33.489: INFO: Running AfterSuite actions on all nodes
  Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2
  Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2
  Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
  Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
  Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
  Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
  Jan 23 01:56:33.489: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
  Jan 23 01:56:33.489: INFO: Running AfterSuite actions on node 1
  Jan 23 01:56:33.489: INFO: Skipping dumping logs from cluster
  
  JUnit report was created: /output/junit_kubetest.01.xml
  {"msg":"Test Suite completed","total":61,"completed":58,"skipped":6914,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]"]}

  
  
  Summarizing 1 Failure:
  
  [Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/scheduling/predicates.go:883
  
  Ran 59 of 6973 Specs in 9269.842 seconds
  FAIL! -- 58 Passed | 1 Failed | 0 Pending | 6914 Skipped
  --- FAIL: TestE2E (9272.52s)
  FAIL

  
  Ginkgo ran 1 suite in 2h34m32.687547334s
  Test Suite Failed

  [FAILED] in [It] - /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 01/23/23 01:56:33.91
  Jan 23 01:56:33.912: INFO: FAILED!
  Jan 23 01:56:33.915: INFO: Cleaning up after "Conformance Tests conformance-tests" spec
  STEP: Dumping logs from the "capz-conf-zs64h3" workload cluster @ 01/23/23 01:56:33.915
  Jan 23 01:56:33.915: INFO: Dumping workload cluster capz-conf-zs64h3/capz-conf-zs64h3 logs
  Jan 23 01:56:34.027: INFO: Collecting logs for Linux node capz-conf-zs64h3-control-plane-dlccj in cluster capz-conf-zs64h3 in namespace capz-conf-zs64h3

  Jan 23 01:57:12.118: INFO: Collecting boot logs for AzureMachine capz-conf-zs64h3-control-plane-dlccj

  Jan 23 01:57:13.070: INFO: Collecting logs for Windows node capz-conf-96jhk in cluster capz-conf-zs64h3 in namespace capz-conf-zs64h3

  Jan 23 01:59:13.743: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-96jhk to /logs/artifacts/clusters/capz-conf-zs64h3/machines/capz-conf-zs64h3-md-win-67dfd985d8-q88x8/crashdumps.tar
  Jan 23 01:59:15.369: INFO: Collecting boot logs for AzureMachine capz-conf-zs64h3-md-win-96jhk

Failed to get logs for Machine capz-conf-zs64h3-md-win-67dfd985d8-q88x8, Cluster capz-conf-zs64h3/capz-conf-zs64h3: running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1
  Jan 23 01:59:16.404: INFO: Collecting logs for Windows node capz-conf-2xrmj in cluster capz-conf-zs64h3 in namespace capz-conf-zs64h3

  Jan 23 02:01:16.071: INFO: Attempting to copy file /c:/crashdumps.tar on node capz-conf-2xrmj to /logs/artifacts/clusters/capz-conf-zs64h3/machines/capz-conf-zs64h3-md-win-67dfd985d8-q945m/crashdumps.tar
  Jan 23 02:01:17.701: INFO: Collecting boot logs for AzureMachine capz-conf-zs64h3-md-win-2xrmj

Failed to get logs for Machine capz-conf-zs64h3-md-win-67dfd985d8-q945m, Cluster capz-conf-zs64h3/capz-conf-zs64h3: running command "$p = 'c:\localdumps' ; if (Test-Path $p) { tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_"} } else { Write-Host "No crash dumps found at $p" }": Process exited with status 1
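  Note: both "Failed to get logs for Machine" errors above come from the same crash-dump collection step, which runs the quoted PowerShell one-liner on each Windows node. The snippet below is only that quoted command reformatted for readability (a sketch based on the error text above, not taken from repository source); the log does not show why it exited with status 1.

      # Archive c:\localdumps into c:\crashdumps.tar if the directory exists;
      # otherwise report that no crash dumps were found.
      $p = 'c:\localdumps'
      if (Test-Path $p) {
          tar.exe -cvzf c:\crashdumps.tar $p *>&1 | %{ Write-Output "$_" }
      } else {
          Write-Host "No crash dumps found at $p"
      }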
  Jan 23 02:01:18.900: INFO: Dumping workload cluster capz-conf-zs64h3/capz-conf-zs64h3 kube-system pod logs
  Jan 23 02:01:19.258: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-7f7758c56-4445b, container calico-apiserver
  Jan 23 02:01:19.258: INFO: Creating log watcher for controller calico-apiserver/calico-apiserver-7f7758c56-gzr5r, container calico-apiserver
  Jan 23 02:01:19.258: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-7f7758c56-4445b
  Jan 23 02:01:19.258: INFO: Collecting events for Pod calico-apiserver/calico-apiserver-7f7758c56-gzr5r
  Jan 23 02:01:19.293: INFO: Collecting events for Pod calico-system/calico-kube-controllers-594d54f99-r76g2
... skipping 69 lines ...
  INFO: Waiting for the Cluster capz-conf-zs64h3/capz-conf-zs64h3 to be deleted
  STEP: Waiting for cluster capz-conf-zs64h3 to be deleted @ 01/23/23 02:01:21.282
  Jan 23 02:08:11.578: INFO: Deleting namespace used for hosting the "conformance-tests" test spec
  INFO: Deleting namespace capz-conf-zs64h3
  Jan 23 02:08:11.598: INFO: Checking if any resources are left over in Azure for spec "conformance-tests"
  STEP: Redacting sensitive information from logs @ 01/23/23 02:08:12.295
• [FAILED] [10593.603 seconds]
Conformance Tests [It] conformance-tests
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100

  [FAILED] Unexpected error:
      <*errors.withStack | 0xc0009ac150>: {
          error: <*errors.withMessage | 0xc000956820>{
              cause: <*errors.errorString | 0xc0004f4b60>{
                  s: "error container run failed with exit code 1",
              },
              msg: "Unable to run conformance tests",
          },
          stack: [0x3143379, 0x353bac7, 0x18e62fb, 0x18f9df8, 0x147c741],
      }
      Unable to run conformance tests: error container run failed with exit code 1
  occurred
  In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 @ 01/23/23 01:56:33.91

  Full Stack Trace
    sigs.k8s.io/cluster-api-provider-azure/test/e2e.glob..func3.2()
    	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238 +0x18fa
... skipping 8 lines ...
[ReportAfterSuite] Autogenerated ReportAfterSuite for --junit-report
autogenerated by Ginkgo
[ReportAfterSuite] PASSED [0.026 seconds]
------------------------------

Summarizing 1 Failure:
  [FAIL] Conformance Tests [It] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:238

Ran 1 of 23 Specs in 10744.654 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 22 Skipped
--- FAIL: TestE2E (10744.68s)
FAIL
You're using deprecated Ginkgo functionality:
=============================================
  CurrentGinkgoTestDescription() is deprecated in Ginkgo V2.  Use CurrentSpecReport() instead.
  Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-currentginkgotestdescription
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:278
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/common.go:281

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=2.6.0


Ginkgo ran 1 suite in 3h1m34.864876414s

Test Suite Failed
make[3]: *** [Makefile:655: test-e2e-run] Error 1
make[3]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: *** [Makefile:670: test-e2e-skip-push] Error 2
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[1]: *** [Makefile:686: test-conformance] Error 2
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make: *** [Makefile:696: test-windows-upstream] Error 2
================ REDACTING LOGS ================
All sensitive variables are redacted
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
... skipping 6 lines ...