Result: failure
Tests: 0 failed / 7 succeeded
Started: 2022-09-03 20:23
Elapsed: 1h8m
Revision:
Uploader: crier

No Test Failures!


7 Passed Tests

20 Skipped Tests

Error lines from build-log.txt

... skipping 898 lines ...
Status: Downloaded newer image for quay.io/jetstack/cert-manager-controller:v1.9.1
quay.io/jetstack/cert-manager-controller:v1.9.1
+ export GINKGO_NODES=3
+ GINKGO_NODES=3
+ export GINKGO_NOCOLOR=true
+ GINKGO_NOCOLOR=true
+ export GINKGO_ARGS=--fail-fast
+ GINKGO_ARGS=--fail-fast
+ export E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ export ARTIFACTS=/logs/artifacts
+ ARTIFACTS=/logs/artifacts
+ export SKIP_RESOURCE_CLEANUP=false
+ SKIP_RESOURCE_CLEANUP=false
... skipping 79 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6 --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition.yaml
mkdir -p /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/extension/config/default > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension/deployment.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/ginkgo-v2.1.4 -v --trace --tags=e2e --focus="\[K8s-Upgrade\]"  --nodes=3 --no-color=true --output-dir="/logs/artifacts" --junit-report="junit.e2e_suite.1.xml" --fail-fast . -- \
    -e2e.artifacts-folder="/logs/artifacts" \
    -e2e.config="/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml" \
    -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false
go: downloading github.com/blang/semver v3.5.1+incompatible
go: downloading k8s.io/api v0.24.2
go: downloading k8s.io/apimachinery v0.24.2
... skipping 227 lines ...
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-uljqkb-mp-0-config created
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-uljqkb-mp-0-config-cgroupfs created
    cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-uljqkb created
    machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-uljqkb-mp-0 created
    dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-uljqkb-dmp-0 created

    Failed to get logs for Machine k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr, Cluster k8s-upgrade-and-conformance-ie185u/k8s-upgrade-and-conformance-uljqkb: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm, Cluster k8s-upgrade-and-conformance-ie185u/k8s-upgrade-and-conformance-uljqkb: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-wkqbk, Cluster k8s-upgrade-and-conformance-ie185u/k8s-upgrade-and-conformance-uljqkb: exit status 2
    Failed to get logs for MachinePool k8s-upgrade-and-conformance-uljqkb-mp-0, Cluster k8s-upgrade-and-conformance-ie185u/k8s-upgrade-and-conformance-uljqkb: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec 09/03/22 20:35:44.194
    INFO: Creating namespace k8s-upgrade-and-conformance-ie185u
    INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-ie185u"
... skipping 41 lines ...
    
    Running in parallel across 4 nodes
    
    Sep  3 20:44:09.085: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  3 20:44:09.088: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
    Sep  3 20:44:09.104: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
    Sep  3 20:44:09.149: INFO: The status of Pod coredns-558bd4d5db-t5gq2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:09.149: INFO: The status of Pod kindnet-r7l4p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:09.149: INFO: The status of Pod kube-proxy-4g4rn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:09.149: INFO: 17 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
    Sep  3 20:44:09.149: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  3 20:44:09.149: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  3 20:44:09.149: INFO: coredns-558bd4d5db-t5gq2  k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  }]
    Sep  3 20:44:09.149: INFO: kindnet-r7l4p             k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:36 +0000 UTC  }]
    Sep  3 20:44:09.149: INFO: kube-proxy-4g4rn          k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  }]
    Sep  3 20:44:09.149: INFO: 
    Sep  3 20:44:11.174: INFO: The status of Pod coredns-558bd4d5db-t5gq2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:11.174: INFO: The status of Pod kindnet-r7l4p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:11.174: INFO: The status of Pod kube-proxy-4g4rn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:11.174: INFO: 17 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
    Sep  3 20:44:11.174: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  3 20:44:11.174: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  3 20:44:11.174: INFO: coredns-558bd4d5db-t5gq2  k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  }]
    Sep  3 20:44:11.174: INFO: kindnet-r7l4p             k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:36 +0000 UTC  }]
    Sep  3 20:44:11.175: INFO: kube-proxy-4g4rn          k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  }]
    Sep  3 20:44:11.175: INFO: 
    Sep  3 20:44:13.170: INFO: The status of Pod coredns-558bd4d5db-t5gq2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:13.170: INFO: The status of Pod kindnet-r7l4p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:13.170: INFO: The status of Pod kube-proxy-4g4rn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:13.170: INFO: 17 / 20 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
    Sep  3 20:44:13.170: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  3 20:44:13.170: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  3 20:44:13.170: INFO: coredns-558bd4d5db-t5gq2  k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  }]
    Sep  3 20:44:13.170: INFO: kindnet-r7l4p             k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:36 +0000 UTC  }]
    Sep  3 20:44:13.170: INFO: kube-proxy-4g4rn          k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  }]
    Sep  3 20:44:13.170: INFO: 
    Sep  3 20:44:15.168: INFO: The status of Pod coredns-558bd4d5db-t5gq2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:15.168: INFO: The status of Pod kindnet-r7l4p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:15.168: INFO: The status of Pod kube-proxy-4g4rn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:15.168: INFO: 17 / 20 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
    Sep  3 20:44:15.168: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  3 20:44:15.168: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  3 20:44:15.168: INFO: coredns-558bd4d5db-t5gq2  k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  }]
    Sep  3 20:44:15.168: INFO: kindnet-r7l4p             k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:36 +0000 UTC  }]
    Sep  3 20:44:15.168: INFO: kube-proxy-4g4rn          k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  }]
    Sep  3 20:44:15.168: INFO: 
    Sep  3 20:44:17.170: INFO: The status of Pod coredns-558bd4d5db-t5gq2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:17.170: INFO: The status of Pod kindnet-r7l4p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:17.170: INFO: The status of Pod kube-proxy-4g4rn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:17.170: INFO: 17 / 20 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
    Sep  3 20:44:17.170: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  3 20:44:17.170: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  3 20:44:17.170: INFO: coredns-558bd4d5db-t5gq2  k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  }]
    Sep  3 20:44:17.171: INFO: kindnet-r7l4p             k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:36 +0000 UTC  }]
    Sep  3 20:44:17.171: INFO: kube-proxy-4g4rn          k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  }]
    Sep  3 20:44:17.171: INFO: 
    Sep  3 20:44:19.169: INFO: The status of Pod coredns-558bd4d5db-t5gq2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:19.169: INFO: The status of Pod kindnet-r7l4p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:19.169: INFO: The status of Pod kube-proxy-4g4rn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:19.169: INFO: 17 / 20 pods in namespace 'kube-system' are running and ready (10 seconds elapsed)
    Sep  3 20:44:19.169: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  3 20:44:19.169: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  3 20:44:19.169: INFO: coredns-558bd4d5db-t5gq2  k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  }]
    Sep  3 20:44:19.169: INFO: kindnet-r7l4p             k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:36 +0000 UTC  }]
    Sep  3 20:44:19.169: INFO: kube-proxy-4g4rn          k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  }]
    Sep  3 20:44:19.169: INFO: 
    Sep  3 20:44:21.173: INFO: The status of Pod coredns-558bd4d5db-t5gq2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:21.173: INFO: The status of Pod kindnet-r7l4p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:21.173: INFO: The status of Pod kube-proxy-4g4rn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:21.173: INFO: 17 / 20 pods in namespace 'kube-system' are running and ready (12 seconds elapsed)
    Sep  3 20:44:21.173: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  3 20:44:21.173: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  3 20:44:21.173: INFO: coredns-558bd4d5db-t5gq2  k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  }]
    Sep  3 20:44:21.173: INFO: kindnet-r7l4p             k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:36 +0000 UTC  }]
    Sep  3 20:44:21.173: INFO: kube-proxy-4g4rn          k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  }]
    Sep  3 20:44:21.173: INFO: 
    Sep  3 20:44:23.176: INFO: The status of Pod coredns-558bd4d5db-t5gq2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:23.176: INFO: The status of Pod kindnet-r7l4p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:23.176: INFO: The status of Pod kube-proxy-4g4rn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:23.176: INFO: 17 / 20 pods in namespace 'kube-system' are running and ready (14 seconds elapsed)
    Sep  3 20:44:23.176: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  3 20:44:23.176: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  3 20:44:23.176: INFO: coredns-558bd4d5db-t5gq2  k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  }]
    Sep  3 20:44:23.176: INFO: kindnet-r7l4p             k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:36 +0000 UTC  }]
    Sep  3 20:44:23.176: INFO: kube-proxy-4g4rn          k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  }]
    Sep  3 20:44:23.176: INFO: 
    Sep  3 20:44:25.171: INFO: The status of Pod coredns-558bd4d5db-t5gq2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:25.171: INFO: The status of Pod kindnet-r7l4p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:25.171: INFO: The status of Pod kube-proxy-4g4rn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:25.171: INFO: 17 / 20 pods in namespace 'kube-system' are running and ready (16 seconds elapsed)
    Sep  3 20:44:25.171: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  3 20:44:25.171: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  3 20:44:25.171: INFO: coredns-558bd4d5db-t5gq2  k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:32 +0000 UTC  }]
    Sep  3 20:44:25.171: INFO: kindnet-r7l4p             k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:37:36 +0000 UTC  }]
    Sep  3 20:44:25.171: INFO: kube-proxy-4g4rn          k8s-upgrade-and-conformance-uljqkb-worker-hz1yac  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:43:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:41:34 +0000 UTC  }]
    Sep  3 20:44:25.171: INFO: 
    Sep  3 20:44:27.169: INFO: The status of Pod coredns-558bd4d5db-kv9f7 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:27.169: INFO: The status of Pod coredns-558bd4d5db-v6445 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep  3 20:44:27.169: INFO: 14 / 16 pods in namespace 'kube-system' are running and ready (18 seconds elapsed)
    Sep  3 20:44:27.169: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep  3 20:44:27.169: INFO: POD                       NODE                                                           PHASE    GRACE  CONDITIONS
    Sep  3 20:44:27.169: INFO: coredns-558bd4d5db-kv9f7  k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-wkqbk  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:44:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:44:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:44:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:44:26 +0000 UTC  }]
    Sep  3 20:44:27.169: INFO: coredns-558bd4d5db-v6445  k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:44:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:44:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:44:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-03 20:44:26 +0000 UTC  }]
    Sep  3 20:44:27.169: INFO: 
... skipping 43 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:44:29.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-1622" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 20:44:29.255: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13549a9a-13bf-41d7-a1d4-55ba47eede0e" in namespace "downward-api-180" to be "Succeeded or Failed"

    Sep  3 20:44:29.263: INFO: Pod "downwardapi-volume-13549a9a-13bf-41d7-a1d4-55ba47eede0e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.288616ms
    Sep  3 20:44:31.268: INFO: Pod "downwardapi-volume-13549a9a-13bf-41d7-a1d4-55ba47eede0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012809977s
    Sep  3 20:44:33.414: INFO: Pod "downwardapi-volume-13549a9a-13bf-41d7-a1d4-55ba47eede0e": Phase="Running", Reason="", readiness=true. Elapsed: 4.159168495s
    Sep  3 20:44:35.419: INFO: Pod "downwardapi-volume-13549a9a-13bf-41d7-a1d4-55ba47eede0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.163707192s
    STEP: Saw pod success
    Sep  3 20:44:35.419: INFO: Pod "downwardapi-volume-13549a9a-13bf-41d7-a1d4-55ba47eede0e" satisfied condition "Succeeded or Failed"

    Sep  3 20:44:35.422: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-tpmotr pod downwardapi-volume-13549a9a-13bf-41d7-a1d4-55ba47eede0e container client-container: <nil>
    STEP: delete the pod
    Sep  3 20:44:35.455: INFO: Waiting for pod downwardapi-volume-13549a9a-13bf-41d7-a1d4-55ba47eede0e to disappear
    Sep  3 20:44:35.458: INFO: Pod downwardapi-volume-13549a9a-13bf-41d7-a1d4-55ba47eede0e no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:44:35.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-180" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:44:35.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-5325" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":54,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:44:37.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-3827" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:44:35.487: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep  3 20:44:35.521: INFO: Waiting up to 5m0s for pod "pod-8fcbc672-44c1-4f39-a2c0-1a7ab49113e7" in namespace "emptydir-2605" to be "Succeeded or Failed"

    Sep  3 20:44:35.525: INFO: Pod "pod-8fcbc672-44c1-4f39-a2c0-1a7ab49113e7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.462869ms
    Sep  3 20:44:37.529: INFO: Pod "pod-8fcbc672-44c1-4f39-a2c0-1a7ab49113e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007466752s
    STEP: Saw pod success
    Sep  3 20:44:37.529: INFO: Pod "pod-8fcbc672-44c1-4f39-a2c0-1a7ab49113e7" satisfied condition "Succeeded or Failed"

    Sep  3 20:44:37.538: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-tpmotr pod pod-8fcbc672-44c1-4f39-a2c0-1a7ab49113e7 container test-container: <nil>
    STEP: delete the pod
    Sep  3 20:44:37.559: INFO: Waiting for pod pod-8fcbc672-44c1-4f39-a2c0-1a7ab49113e7 to disappear
    Sep  3 20:44:37.563: INFO: Pod pod-8fcbc672-44c1-4f39-a2c0-1a7ab49113e7 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:44:37.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-2605" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
    Sep  3 20:44:44.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9054 explain e2e-test-crd-publish-openapi-1621-crds.spec'
    Sep  3 20:44:44.572: INFO: stderr: ""
    Sep  3 20:44:44.572: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1621-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
    Sep  3 20:44:44.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9054 explain e2e-test-crd-publish-openapi-1621-crds.spec.bars'
    Sep  3 20:44:44.852: INFO: stderr: ""
    Sep  3 20:44:44.852: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1621-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
    STEP: kubectl explain works to return error when explain is called on property that doesn't exist

    Sep  3 20:44:44.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-9054 explain e2e-test-crd-publish-openapi-1621-crds.spec.bars2'
    Sep  3 20:44:45.110: INFO: rc: 1
    [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:44:47.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-9054" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":3,"skipped":54,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 49 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:44:54.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-3708" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:44:54.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-7205" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":2,"skipped":38,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 39 lines ...
    Sep  3 20:44:54.597: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep  3 20:44:54.638: INFO: Waiting up to 5m0s for pod "security-context-8f040c48-ef6d-4a7a-9c84-1740650610c1" in namespace "security-context-2672" to be "Succeeded or Failed"

    Sep  3 20:44:54.641: INFO: Pod "security-context-8f040c48-ef6d-4a7a-9c84-1740650610c1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.049811ms
    Sep  3 20:44:56.646: INFO: Pod "security-context-8f040c48-ef6d-4a7a-9c84-1740650610c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007823901s
    STEP: Saw pod success
    Sep  3 20:44:56.646: INFO: Pod "security-context-8f040c48-ef6d-4a7a-9c84-1740650610c1" satisfied condition "Succeeded or Failed"

    Sep  3 20:44:56.650: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-wkqbk pod security-context-8f040c48-ef6d-4a7a-9c84-1740650610c1 container test-container: <nil>
    STEP: delete the pod
    Sep  3 20:44:56.676: INFO: Waiting for pod security-context-8f040c48-ef6d-4a7a-9c84-1740650610c1 to disappear
    Sep  3 20:44:56.679: INFO: Pod security-context-8f040c48-ef6d-4a7a-9c84-1740650610c1 no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:44:56.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-2672" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":49,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:44:59.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-9618" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":57,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:44:59.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-839" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":5,"skipped":63,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 46 lines ...
    STEP: Destroying namespace "services-300" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":80,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:01.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-2767" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":4,"skipped":68,"failed":0}

    
    SSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}

    [BeforeEach] [sig-network] EndpointSliceMirroring
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:44:55.932: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename endpointslicemirroring
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:02.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslicemirroring-6032" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:45:02.398: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep  3 20:45:02.439: INFO: Waiting up to 5m0s for pod "pod-70b3def9-da3b-48c2-8adc-319bdd921784" in namespace "emptydir-8339" to be "Succeeded or Failed"

    Sep  3 20:45:02.448: INFO: Pod "pod-70b3def9-da3b-48c2-8adc-319bdd921784": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08477ms
    Sep  3 20:45:04.452: INFO: Pod "pod-70b3def9-da3b-48c2-8adc-319bdd921784": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012918709s
    STEP: Saw pod success
    Sep  3 20:45:04.453: INFO: Pod "pod-70b3def9-da3b-48c2-8adc-319bdd921784" satisfied condition "Succeeded or Failed"

    Sep  3 20:45:04.456: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-70b3def9-da3b-48c2-8adc-319bdd921784 container test-container: <nil>
    STEP: delete the pod
    Sep  3 20:45:04.486: INFO: Waiting for pod pod-70b3def9-da3b-48c2-8adc-319bdd921784 to disappear
    Sep  3 20:45:04.490: INFO: Pod pod-70b3def9-da3b-48c2-8adc-319bdd921784 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:04.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8339" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":76,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:45:00.416: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's command [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's command
    Sep  3 20:45:00.485: INFO: Waiting up to 5m0s for pod "var-expansion-ac795e0f-bff9-41e3-b556-64d1ad715ffb" in namespace "var-expansion-8280" to be "Succeeded or Failed"

    Sep  3 20:45:00.490: INFO: Pod "var-expansion-ac795e0f-bff9-41e3-b556-64d1ad715ffb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.71003ms
    Sep  3 20:45:02.495: INFO: Pod "var-expansion-ac795e0f-bff9-41e3-b556-64d1ad715ffb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009746039s
    Sep  3 20:45:04.508: INFO: Pod "var-expansion-ac795e0f-bff9-41e3-b556-64d1ad715ffb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02352542s
    STEP: Saw pod success
    Sep  3 20:45:04.508: INFO: Pod "var-expansion-ac795e0f-bff9-41e3-b556-64d1ad715ffb" satisfied condition "Succeeded or Failed"

    Sep  3 20:45:04.512: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod var-expansion-ac795e0f-bff9-41e3-b556-64d1ad715ffb container dapi-container: <nil>
    STEP: delete the pod
    Sep  3 20:45:04.543: INFO: Waiting for pod var-expansion-ac795e0f-bff9-41e3-b556-64d1ad715ffb to disappear
    Sep  3 20:45:04.546: INFO: Pod var-expansion-ac795e0f-bff9-41e3-b556-64d1ad715ffb no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:04.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-8280" for this suite.
    
    •S
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":99,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:04.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-308" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":6,"skipped":112,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:06.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-6530" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":7,"skipped":113,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:08.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4312" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":124,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:45:06.929: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-5281/configmap-test-b5771dfc-fc65-41f1-be71-0871d86f5c4a
    STEP: Creating a pod to test consume configMaps
    Sep  3 20:45:07.031: INFO: Waiting up to 5m0s for pod "pod-configmaps-9c848d8e-3df0-4052-9e7d-3971ef902d34" in namespace "configmap-5281" to be "Succeeded or Failed"

    Sep  3 20:45:07.065: INFO: Pod "pod-configmaps-9c848d8e-3df0-4052-9e7d-3971ef902d34": Phase="Pending", Reason="", readiness=false. Elapsed: 34.593265ms
    Sep  3 20:45:09.070: INFO: Pod "pod-configmaps-9c848d8e-3df0-4052-9e7d-3971ef902d34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.039163423s
    STEP: Saw pod success
    Sep  3 20:45:09.070: INFO: Pod "pod-configmaps-9c848d8e-3df0-4052-9e7d-3971ef902d34" satisfied condition "Succeeded or Failed"

    Sep  3 20:45:09.073: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-configmaps-9c848d8e-3df0-4052-9e7d-3971ef902d34 container env-test: <nil>
    STEP: delete the pod
    Sep  3 20:45:09.091: INFO: Waiting for pod pod-configmaps-9c848d8e-3df0-4052-9e7d-3971ef902d34 to disappear
    Sep  3 20:45:09.094: INFO: Pod pod-configmaps-9c848d8e-3df0-4052-9e7d-3971ef902d34 no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:09.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5281" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":125,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:45:08.876: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod UID as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  3 20:45:08.930: INFO: Waiting up to 5m0s for pod "downward-api-f9ed304b-fe5f-4836-87d9-0ce2dd7aa268" in namespace "downward-api-7764" to be "Succeeded or Failed"

    Sep  3 20:45:08.934: INFO: Pod "downward-api-f9ed304b-fe5f-4836-87d9-0ce2dd7aa268": Phase="Pending", Reason="", readiness=false. Elapsed: 3.529096ms
    Sep  3 20:45:10.939: INFO: Pod "downward-api-f9ed304b-fe5f-4836-87d9-0ce2dd7aa268": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009173891s
    STEP: Saw pod success
    Sep  3 20:45:10.939: INFO: Pod "downward-api-f9ed304b-fe5f-4836-87d9-0ce2dd7aa268" satisfied condition "Succeeded or Failed"

    Sep  3 20:45:10.943: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod downward-api-f9ed304b-fe5f-4836-87d9-0ce2dd7aa268 container dapi-container: <nil>
    STEP: delete the pod
    Sep  3 20:45:10.962: INFO: Waiting for pod downward-api-f9ed304b-fe5f-4836-87d9-0ce2dd7aa268 to disappear
    Sep  3 20:45:10.965: INFO: Pod downward-api-f9ed304b-fe5f-4836-87d9-0ce2dd7aa268 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:10.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7764" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":157,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:11.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-8092" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":6,"skipped":138,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:11.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-4642" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":7,"skipped":168,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:12.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-9848" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":6,"skipped":213,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "services-8632" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":7,"skipped":217,"failed":0}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:45:18.710: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-d89164df-82fe-4942-ac39-676645bf7624
    STEP: Creating a pod to test consume secrets
    Sep  3 20:45:18.807: INFO: Waiting up to 5m0s for pod "pod-secrets-8b01364b-de3a-41eb-884d-d07161917036" in namespace "secrets-816" to be "Succeeded or Failed"
    Sep  3 20:45:18.811: INFO: Pod "pod-secrets-8b01364b-de3a-41eb-884d-d07161917036": Phase="Pending", Reason="", readiness=false. Elapsed: 4.53313ms
    Sep  3 20:45:20.815: INFO: Pod "pod-secrets-8b01364b-de3a-41eb-884d-d07161917036": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00795042s
    STEP: Saw pod success
    Sep  3 20:45:20.815: INFO: Pod "pod-secrets-8b01364b-de3a-41eb-884d-d07161917036" satisfied condition "Succeeded or Failed"
    Sep  3 20:45:20.817: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-secrets-8b01364b-de3a-41eb-884d-d07161917036 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  3 20:45:20.830: INFO: Waiting for pod pod-secrets-8b01364b-de3a-41eb-884d-d07161917036 to disappear
    Sep  3 20:45:20.833: INFO: Pod pod-secrets-8b01364b-de3a-41eb-884d-d07161917036 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:20.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-816" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":228,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 48 lines ...
    STEP: Destroying namespace "services-6914" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":9,"skipped":294,"failed":0}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Looking for a node to schedule stateful set and pod
    STEP: Creating pod with conflicting port in namespace statefulset-1767
    STEP: Creating statefulset with conflicting port in namespace statefulset-1767
    STEP: Waiting until pod test-pod will start running in namespace statefulset-1767
    STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1767
    Sep  3 20:45:15.620: INFO: Observed stateful pod in namespace: statefulset-1767, name: ss-0, uid: d036e56d-d5e7-45c8-8217-336306918c1f, status phase: Pending. Waiting for statefulset controller to delete.
    Sep  3 20:45:16.609: INFO: Observed stateful pod in namespace: statefulset-1767, name: ss-0, uid: d036e56d-d5e7-45c8-8217-336306918c1f, status phase: Failed. Waiting for statefulset controller to delete.
    Sep  3 20:45:16.616: INFO: Observed stateful pod in namespace: statefulset-1767, name: ss-0, uid: d036e56d-d5e7-45c8-8217-336306918c1f, status phase: Failed. Waiting for statefulset controller to delete.
    Sep  3 20:45:16.619: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1767
    STEP: Removing pod with conflicting port in namespace statefulset-1767
    STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1767 and will be in running state
    [AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
    Sep  3 20:45:20.642: INFO: Deleting all statefulset in ns statefulset-1767
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:40.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-1767" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":8,"skipped":212,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 20:45:40.768: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca4382a6-ae2c-4d11-a16f-5e977e4e89ab" in namespace "downward-api-9768" to be "Succeeded or Failed"
    Sep  3 20:45:40.777: INFO: Pod "downwardapi-volume-ca4382a6-ae2c-4d11-a16f-5e977e4e89ab": Phase="Pending", Reason="", readiness=false. Elapsed: 7.276612ms
    Sep  3 20:45:42.784: INFO: Pod "downwardapi-volume-ca4382a6-ae2c-4d11-a16f-5e977e4e89ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014746083s
    STEP: Saw pod success
    Sep  3 20:45:42.784: INFO: Pod "downwardapi-volume-ca4382a6-ae2c-4d11-a16f-5e977e4e89ab" satisfied condition "Succeeded or Failed"
    Sep  3 20:45:42.788: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod downwardapi-volume-ca4382a6-ae2c-4d11-a16f-5e977e4e89ab container client-container: <nil>
    STEP: delete the pod
    Sep  3 20:45:42.807: INFO: Waiting for pod downwardapi-volume-ca4382a6-ae2c-4d11-a16f-5e977e4e89ab to disappear
    Sep  3 20:45:42.811: INFO: Pod downwardapi-volume-ca4382a6-ae2c-4d11-a16f-5e977e4e89ab no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:42.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-9768" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":216,"failed":0}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:45:45.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3032" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":10,"skipped":231,"failed":0}
    
    SSSSS
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":132,"failed":0}
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:45:31.263: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename endpointslice
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:01.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-1124" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":10,"skipped":132,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:05.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-2843" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":134,"failed":0}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 20:46:05.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ac10443-a1bc-4b18-9f5e-fd19576a06fc" in namespace "downward-api-2855" to be "Succeeded or Failed"
    Sep  3 20:46:05.680: INFO: Pod "downwardapi-volume-9ac10443-a1bc-4b18-9f5e-fd19576a06fc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.67215ms
    Sep  3 20:46:07.684: INFO: Pod "downwardapi-volume-9ac10443-a1bc-4b18-9f5e-fd19576a06fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007945583s
    STEP: Saw pod success
    Sep  3 20:46:07.684: INFO: Pod "downwardapi-volume-9ac10443-a1bc-4b18-9f5e-fd19576a06fc" satisfied condition "Succeeded or Failed"
    Sep  3 20:46:07.687: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod downwardapi-volume-9ac10443-a1bc-4b18-9f5e-fd19576a06fc container client-container: <nil>
    STEP: delete the pod
    Sep  3 20:46:07.708: INFO: Waiting for pod downwardapi-volume-9ac10443-a1bc-4b18-9f5e-fd19576a06fc to disappear
    Sep  3 20:46:07.711: INFO: Pod downwardapi-volume-9ac10443-a1bc-4b18-9f5e-fd19576a06fc no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:07.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-2855" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":149,"failed":0}
    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:45:45.750: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  3 20:45:45.788: INFO: created pod
    Sep  3 20:45:45.788: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-9425" to be "Succeeded or Failed"
    Sep  3 20:45:45.791: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.13952ms
    Sep  3 20:45:47.795: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007511391s
    STEP: Saw pod success
    Sep  3 20:45:47.795: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
    Sep  3 20:46:17.796: INFO: polling logs
    Sep  3 20:46:17.802: INFO: Pod logs: 
    2022/09/03 20:45:46 OK: Got token
    2022/09/03 20:45:46 validating with in-cluster discovery
    2022/09/03 20:45:46 OK: got issuer https://kubernetes.default.svc.cluster.local
    2022/09/03 20:45:46 Full, not-validated claims: 
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:17.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-9425" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":11,"skipped":236,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:18.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-4028" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":13,"skipped":167,"failed":0}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:21.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-4147" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":12,"skipped":240,"failed":0}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:21.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-3998" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":14,"skipped":176,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
    [It] should serve a basic endpoint from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating service endpoint-test2 in namespace services-5862
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5862 to expose endpoints map[]
    Sep  3 20:46:21.811: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found
    Sep  3 20:46:22.819: INFO: successfully validated that service endpoint-test2 in namespace services-5862 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-5862
    Sep  3 20:46:22.829: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep  3 20:46:24.842: INFO: The status of Pod pod1 is Running (Ready = true)
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5862 to expose endpoints map[pod1:[80]]
    Sep  3 20:46:24.866: INFO: successfully validated that service endpoint-test2 in namespace services-5862 exposes endpoints map[pod1:[80]]
... skipping 14 lines ...
    STEP: Destroying namespace "services-5862" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":15,"skipped":178,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 43 lines ...
    STEP: Destroying namespace "services-8107" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":16,"skipped":212,"failed":0}
    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Service endpoints latency
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 418 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:32.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svc-latency-4532" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":13,"skipped":253,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:46:32.188: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create secret due to empty secret key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name secret-emptykey-test-8f7bfe97-5213-4ee0-9d86-49d9ad4ddbdf
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:32.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-902" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":14,"skipped":280,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-projected-all-test-volume-d0241670-b542-4c44-a7a7-4df090b8395d
    STEP: Creating secret with name secret-projected-all-test-volume-dd770741-5e55-4487-8b34-44a331040dee
    STEP: Creating a pod to test Check all projections for projected volume plugin
    Sep  3 20:46:32.282: INFO: Waiting up to 5m0s for pod "projected-volume-886fbe82-37d4-41d1-b7ef-243da0888a2e" in namespace "projected-5682" to be "Succeeded or Failed"
    Sep  3 20:46:32.286: INFO: Pod "projected-volume-886fbe82-37d4-41d1-b7ef-243da0888a2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.652386ms
    Sep  3 20:46:34.290: INFO: Pod "projected-volume-886fbe82-37d4-41d1-b7ef-243da0888a2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006741804s
    STEP: Saw pod success
    Sep  3 20:46:34.290: INFO: Pod "projected-volume-886fbe82-37d4-41d1-b7ef-243da0888a2e" satisfied condition "Succeeded or Failed"
    Sep  3 20:46:34.292: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod projected-volume-886fbe82-37d4-41d1-b7ef-243da0888a2e container projected-all-volume-test: <nil>
    STEP: delete the pod
    Sep  3 20:46:34.307: INFO: Waiting for pod projected-volume-886fbe82-37d4-41d1-b7ef-243da0888a2e to disappear
    Sep  3 20:46:34.310: INFO: Pod projected-volume-886fbe82-37d4-41d1-b7ef-243da0888a2e no longer exists
    [AfterEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:34.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5682" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":284,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:37.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-776" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":229,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
    STEP: Destroying namespace "services-8237" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":18,"skipped":233,"failed":0}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
    STEP: verifying the pod is in kubernetes
    STEP: updating the pod
    Sep  3 20:46:36.880: INFO: Successfully updated pod "pod-update-activedeadlineseconds-0d98fefd-e6ad-4850-9561-f1f3bcea6ba6"
    Sep  3 20:46:36.880: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-0d98fefd-e6ad-4850-9561-f1f3bcea6ba6" in namespace "pods-4253" to be "terminated due to deadline exceeded"
    Sep  3 20:46:36.884: INFO: Pod "pod-update-activedeadlineseconds-0d98fefd-e6ad-4850-9561-f1f3bcea6ba6": Phase="Running", Reason="", readiness=true. Elapsed: 3.166048ms
    Sep  3 20:46:38.887: INFO: Pod "pod-update-activedeadlineseconds-0d98fefd-e6ad-4850-9561-f1f3bcea6ba6": Phase="Running", Reason="", readiness=true. Elapsed: 2.006425788s
    Sep  3 20:46:40.891: INFO: Pod "pod-update-activedeadlineseconds-0d98fefd-e6ad-4850-9561-f1f3bcea6ba6": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.010440843s
    Sep  3 20:46:40.891: INFO: Pod "pod-update-activedeadlineseconds-0d98fefd-e6ad-4850-9561-f1f3bcea6ba6" satisfied condition "terminated due to deadline exceeded"
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:40.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-4253" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":286,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:41.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7898" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":236,"failed":0}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:46.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-4133" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":20,"skipped":252,"failed":0}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 30 lines ...
    STEP: Destroying namespace "webhook-6149-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":17,"skipped":327,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Aggregator
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:56.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "aggregator-7858" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":21,"skipped":263,"failed":0}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Discovery
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 89 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:57.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "discovery-7883" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":22,"skipped":270,"failed":0}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:46:57.151: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-dcc80dce-6ef1-4644-9b82-b684ef862167
    STEP: Creating a pod to test consume configMaps
    Sep  3 20:46:57.202: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a3f29fd9-183c-4fd7-a738-cd81f54d3387" in namespace "projected-4036" to be "Succeeded or Failed"
    Sep  3 20:46:57.205: INFO: Pod "pod-projected-configmaps-a3f29fd9-183c-4fd7-a738-cd81f54d3387": Phase="Pending", Reason="", readiness=false. Elapsed: 2.27856ms
    Sep  3 20:46:59.210: INFO: Pod "pod-projected-configmaps-a3f29fd9-183c-4fd7-a738-cd81f54d3387": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007209662s
    STEP: Saw pod success
    Sep  3 20:46:59.210: INFO: Pod "pod-projected-configmaps-a3f29fd9-183c-4fd7-a738-cd81f54d3387" satisfied condition "Succeeded or Failed"
    Sep  3 20:46:59.213: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-projected-configmaps-a3f29fd9-183c-4fd7-a738-cd81f54d3387 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 20:46:59.228: INFO: Waiting for pod pod-projected-configmaps-a3f29fd9-183c-4fd7-a738-cd81f54d3387 to disappear
    Sep  3 20:46:59.232: INFO: Pod pod-projected-configmaps-a3f29fd9-183c-4fd7-a738-cd81f54d3387 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:46:59.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4036" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":285,"failed":0}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:47:02.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-5029" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":24,"skipped":290,"failed":0}
    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
    STEP: Destroying namespace "services-2665" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":18,"skipped":359,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:47:15.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-6835" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":25,"skipped":309,"failed":0}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:47:32.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-5051" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":317,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:47:38.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-4444" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":19,"skipped":392,"failed":0}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:47:38.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-2051" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":20,"skipped":397,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
    STEP: Destroying namespace "webhook-384-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":27,"skipped":318,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:47:38.177: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on node default medium
    Sep  3 20:47:38.210: INFO: Waiting up to 5m0s for pod "pod-c333d685-d5cc-44c0-9ca6-4c9f29f2e3f7" in namespace "emptydir-1137" to be "Succeeded or Failed"
    Sep  3 20:47:38.213: INFO: Pod "pod-c333d685-d5cc-44c0-9ca6-4c9f29f2e3f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.466204ms
    Sep  3 20:47:40.217: INFO: Pod "pod-c333d685-d5cc-44c0-9ca6-4c9f29f2e3f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007032611s
    STEP: Saw pod success
    Sep  3 20:47:40.217: INFO: Pod "pod-c333d685-d5cc-44c0-9ca6-4c9f29f2e3f7" satisfied condition "Succeeded or Failed"
    Sep  3 20:47:40.220: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-c333d685-d5cc-44c0-9ca6-4c9f29f2e3f7 container test-container: <nil>
    STEP: delete the pod
    Sep  3 20:47:40.235: INFO: Waiting for pod pod-c333d685-d5cc-44c0-9ca6-4c9f29f2e3f7 to disappear
    Sep  3 20:47:40.238: INFO: Pod pod-c333d685-d5cc-44c0-9ca6-4c9f29f2e3f7 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:47:40.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1137" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":431,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:47:40.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-9619" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":22,"skipped":452,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:47:45.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-6885" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":23,"skipped":476,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:47:45.312: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep  3 20:47:45.348: INFO: PodSpec: initContainers in spec.initContainers
    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:47:49.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-8327" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":24,"skipped":478,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:47:56.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-3061" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":28,"skipped":322,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:47:58.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-7348" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":371,"failed":0}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:48:02.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-8096" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":30,"skipped":380,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:45:40.278: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod with failed condition
    STEP: updating the pod
    Sep  3 20:47:40.917: INFO: Successfully updated pod "var-expansion-33dc8858-10d0-4bd9-ac52-a9de1d2b3284"
    STEP: waiting for pod running
    STEP: deleting the pod gracefully
    Sep  3 20:47:42.937: INFO: Deleting pod "var-expansion-33dc8858-10d0-4bd9-ac52-a9de1d2b3284" in namespace "var-expansion-304"
    Sep  3 20:47:42.942: INFO: Wait up to 5m0s for pod "var-expansion-33dc8858-10d0-4bd9-ac52-a9de1d2b3284" to be fully deleted
... skipping 6 lines ...
    • [SLOW TEST:162.678 seconds]
    [sig-node] Variable Expansion
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":10,"skipped":303,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 36 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:48:24.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-2552" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":382,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:48:22.962: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-ee89c257-014d-40b0-a422-8d9b1ea0a96f
    STEP: Creating a pod to test consume configMaps
    Sep  3 20:48:23.007: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1eddbdbf-632d-4a51-92b6-c7c3da6bf8ed" in namespace "projected-6417" to be "Succeeded or Failed"
    Sep  3 20:48:23.011: INFO: Pod "pod-projected-configmaps-1eddbdbf-632d-4a51-92b6-c7c3da6bf8ed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.523835ms
    Sep  3 20:48:25.016: INFO: Pod "pod-projected-configmaps-1eddbdbf-632d-4a51-92b6-c7c3da6bf8ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008372633s
    STEP: Saw pod success
    Sep  3 20:48:25.016: INFO: Pod "pod-projected-configmaps-1eddbdbf-632d-4a51-92b6-c7c3da6bf8ed" satisfied condition "Succeeded or Failed"
    Sep  3 20:48:25.022: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-wkqbk pod pod-projected-configmaps-1eddbdbf-632d-4a51-92b6-c7c3da6bf8ed container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 20:48:25.048: INFO: Waiting for pod pod-projected-configmaps-1eddbdbf-632d-4a51-92b6-c7c3da6bf8ed to disappear
    Sep  3 20:48:25.053: INFO: Pod pod-projected-configmaps-1eddbdbf-632d-4a51-92b6-c7c3da6bf8ed no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:48:25.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6417" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":305,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:48:25.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-734" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":32,"skipped":422,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
    
    Sep  3 20:48:33.693: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment":
    &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88  deployment-7356  ffb48b8b-e108-4ec6-b1b7-2c355c793bda 7813 3 2022-09-03 20:48:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment f012cb04-7a5b-430e-99da-7201e195dd7d 0xc003dd1b17 0xc003dd1b18}] []  [{kube-controller-manager Update apps/v1 2022-09-03 20:48:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f012cb04-7a5b-430e-99da-7201e195dd7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003dd1ba8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
    Sep  3 20:48:33.693: INFO: All old ReplicaSets of Deployment "webserver-deployment":
    Sep  3 20:48:33.693: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb  deployment-7356  d29c2c30-e2fd-4f3c-9bfe-d6e5554d0717 7810 3 2022-09-03 20:48:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment f012cb04-7a5b-430e-99da-7201e195dd7d 0xc003dd1c37 0xc003dd1c38}] []  [{kube-controller-manager Update apps/v1 2022-09-03 20:48:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f012cb04-7a5b-430e-99da-7201e195dd7d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [] []  []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003dd1cb8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
    Sep  3 20:48:33.709: INFO: Pod "webserver-deployment-795d758f88-47xr2" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-47xr2 webserver-deployment-795d758f88- deployment-7356  eca3411e-0b02-4ab4-a66b-aa7221c8fb36 7798 0 2022-09-03 20:48:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ffb48b8b-e108-4ec6-b1b7-2c355c793bda 0xc003e741b0 0xc003e741b1}] []  [{kube-controller-manager Update v1 2022-09-03 20:48:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffb48b8b-e108-4ec6-b1b7-2c355c793bda\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-03 20:48:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.21\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7prms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7prms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil
,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-wkqbk,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.21,StartTime:2022-09-03 20:48:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.21,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  3 20:48:33.710: INFO: Pod "webserver-deployment-795d758f88-65zfk" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-65zfk webserver-deployment-795d758f88- deployment-7356  8ef69c9d-4485-46e3-adbd-ada2eb723017 7830 0 2022-09-03 20:48:33 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ffb48b8b-e108-4ec6-b1b7-2c355c793bda 0xc003e74400 0xc003e74401}] []  [{kube-controller-manager Update v1 2022-09-03 20:48:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffb48b8b-e108-4ec6-b1b7-2c355c793bda\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kp2pn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kp2pn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions
:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  3 20:48:33.710: INFO: Pod "webserver-deployment-795d758f88-9sx8n" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-9sx8n webserver-deployment-795d758f88- deployment-7356  7b53f2bf-ebda-46fd-a037-36dfc94eb385 7829 0 2022-09-03 20:48:33 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ffb48b8b-e108-4ec6-b1b7-2c355c793bda 0xc003e74547 0xc003e74548}] []  [{kube-controller-manager Update v1 2022-09-03 20:48:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffb48b8b-e108-4ec6-b1b7-2c355c793bda\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4945v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4945v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-uljqkb-worker-tpmotr,HostNetwork:false,HostPID:false,HostIPC:false,Se
curityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  3 20:48:33.710: INFO: Pod "webserver-deployment-795d758f88-gjpvk" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-gjpvk webserver-deployment-795d758f88- deployment-7356  5dacfcc6-21cc-4b2d-a7e9-f330fa18a8df 7795 0 2022-09-03 20:48:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ffb48b8b-e108-4ec6-b1b7-2c355c793bda 0xc003e74710 0xc003e74711}] []  [{kube-controller-manager Update v1 2022-09-03 20:48:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffb48b8b-e108-4ec6-b1b7-2c355c793bda\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-03 20:48:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.20\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-czfxb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-czfxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil
,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-wkqbk,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.20,StartTime:2022-09-03 20:48:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  3 20:48:33.710: INFO: Pod "webserver-deployment-795d758f88-j4pzd" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-j4pzd webserver-deployment-795d758f88- deployment-7356  8fe0fe2a-e436-4729-826f-5bd8e4a2793b 7802 0 2022-09-03 20:48:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ffb48b8b-e108-4ec6-b1b7-2c355c793bda 0xc003e74970 0xc003e74971}] []  [{kube-controller-manager Update v1 2022-09-03 20:48:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffb48b8b-e108-4ec6-b1b7-2c355c793bda\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-03 20:48:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.41\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fr2xc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fr2xc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil
,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-uljqkb-worker-gvulve,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.41,StartTime:2022-09-03 20:48:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  3 20:48:33.710: INFO: Pod "webserver-deployment-795d758f88-rlhlk" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-rlhlk webserver-deployment-795d758f88- deployment-7356  c48c8d2b-b9d8-4b1d-9f9b-e4890ddc04a1 7820 0 2022-09-03 20:48:33 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ffb48b8b-e108-4ec6-b1b7-2c355c793bda 0xc003e74c80 0xc003e74c81}] []  [{kube-controller-manager Update v1 2022-09-03 20:48:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffb48b8b-e108-4ec6-b1b7-2c355c793bda\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2jdfk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2jdfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions
:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  3 20:48:33.711: INFO: Pod "webserver-deployment-795d758f88-t7qqv" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-t7qqv webserver-deployment-795d758f88- deployment-7356  7ea4d79a-5dfc-4f71-8784-7647296d8a60 7808 0 2022-09-03 20:48:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ffb48b8b-e108-4ec6-b1b7-2c355c793bda 0xc003e74e57 0xc003e74e58}] []  [{kube-controller-manager Update v1 2022-09-03 20:48:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffb48b8b-e108-4ec6-b1b7-2c355c793bda\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-03 20:48:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.17\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-66lxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-66lxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil
,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-uljqkb-worker-tpmotr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.17,StartTime:2022-09-03 20:48:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.17,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  3 20:48:33.711: INFO: Pod "webserver-deployment-795d758f88-v4j75" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-v4j75 webserver-deployment-795d758f88- deployment-7356  ff5cc890-ccc7-4714-8bd1-b7ea84409f20 7822 0 2022-09-03 20:48:33 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ffb48b8b-e108-4ec6-b1b7-2c355c793bda 0xc003e75160 0xc003e75161}] []  [{kube-controller-manager Update v1 2022-09-03 20:48:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffb48b8b-e108-4ec6-b1b7-2c355c793bda\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xq6br,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xq6br,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-uljqkb-worker-gvulve,HostNetwork:false,HostPID:false,HostIPC:false,Se
curityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  3 20:48:33.711: INFO: Pod "webserver-deployment-795d758f88-vzmnn" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-vzmnn webserver-deployment-795d758f88- deployment-7356  4243b064-9fea-4099-ac3a-f678e979758a 7803 0 2022-09-03 20:48:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ffb48b8b-e108-4ec6-b1b7-2c355c793bda 0xc003e75360 0xc003e75361}] []  [{kube-controller-manager Update v1 2022-09-03 20:48:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffb48b8b-e108-4ec6-b1b7-2c355c793bda\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-03 20:48:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zd9rm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zd9rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil
,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.27,StartTime:2022-09-03 20:48:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  3 20:48:33.711: INFO: Pod "webserver-deployment-847dcfb7fb-2kczv" is available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2kczv webserver-deployment-847dcfb7fb- deployment-7356  9a55cde6-1ca9-4eb5-8974-b899d516151c 7628 0 2022-09-03 20:48:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d29c2c30-e2fd-4f3c-9bfe-d6e5554d0717 0xc003e75600 0xc003e75601}] []  [{kube-controller-manager Update v1 2022-09-03 20:48:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d29c2c30-e2fd-4f3c-9bfe-d6e5554d0717\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-03 20:48:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.26\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dpxlj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dpxlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:n
il,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.26,StartTime:2022-09-03 20:48:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-03 20:48:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://2e792ca96854dea80a4a25ad5f2a450a323ef9d056ef78c63b3f3c232e031f9e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  3 20:48:33.711: INFO: Pod "webserver-deployment-847dcfb7fb-5ztbt" is available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-5ztbt webserver-deployment-847dcfb7fb- deployment-7356  a31fe587-92a9-4a95-8da5-4860e5ce55d7 7626 0 2022-09-03 20:48:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d29c2c30-e2fd-4f3c-9bfe-d6e5554d0717 0xc003e75820 0xc003e75821}] []  [{kube-controller-manager Update v1 2022-09-03 20:48:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d29c2c30-e2fd-4f3c-9bfe-d6e5554d0717\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-03 20:48:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.25\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bxb7f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bxb7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:n
il,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.25,StartTime:2022-09-03 20:48:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-03 20:48:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://8229fabdd8f1c10fdd14ac5f99dd28a3d4d54dbc8c5493c327a2b5a75076387e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  3 20:48:33.712: INFO: Pod "webserver-deployment-847dcfb7fb-787zl" is available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-787zl webserver-deployment-847dcfb7fb- deployment-7356  fe700c1e-151d-4823-9334-3183022c8da4 7629 0 2022-09-03 20:48:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb d29c2c30-e2fd-4f3c-9bfe-d6e5554d0717 0xc003e75a40 0xc003e75a41}] []  [{kube-controller-manager Update v1 2022-09-03 20:48:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d29c2c30-e2fd-4f3c-9bfe-d6e5554d0717\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-03 20:48:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6cng8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6cng8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:n
il,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-uljqkb-worker-gvulve,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-03 20:48:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.40,StartTime:2022-09-03 20:48:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-03 20:48:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://60de6178aeec64f3cbe904f62c7555d9618dfd575ae9c658cd18b82a0ee7cef6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:48:33.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-7356" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":33,"skipped":457,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:48:38.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-2252" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":523,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 48 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:48:41.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7942" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":35,"skipped":536,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:49:25.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-4924" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":308,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:49:29.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9223" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":309,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:49:30.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "certificates-7582" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":14,"skipped":339,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "webhook-772-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":15,"skipped":340,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  3 20:48:43.845: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4646/dns-test-a1e5e7a1-76a5-4f86-b797-f6da3daa95ea: the server is currently unable to handle the request (get pods dns-test-a1e5e7a1-76a5-4f86-b797-f6da3daa95ea)
    Sep  3 20:50:09.919: FAIL: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4646/dns-test-a1e5e7a1-76a5-4f86-b797-f6da3daa95ea: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-4646/pods/dns-test-a1e5e7a1-76a5-4f86-b797-f6da3daa95ea/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001e1bda8, 0x29a3500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc002a9c108, 0xc001e1bda8, 0xc002a9c108, 0xc001e1bda8)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
    testing.tRunner(0xc00112ad80, 0x70fea78)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0903 20:50:09.919925      15 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  3 20:50:09.919: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4646/dns-test-a1e5e7a1-76a5-4f86-b797-f6da3daa95ea: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-4646/pods/dns-test-a1e5e7a1-76a5-4f86-b797-f6da3daa95ea/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001e1bda8, 0x29a3500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc002a9c108, 0xc001e1bda8, 0xc002a9c108, 0xc001e1bda8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc001e1bda8, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc001771e80, 0x8, 0x8, 0x6ee63d3, 0x7, 0xc002b86000, 0x77b8c18, 0xc00287d1e0, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0006db080, 0xc002b86000, 0xc001771e80, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.1()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x58a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc00112ad80)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc00112ad80)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc00112ad80, 0x70fea78)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
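The Ginkgo message above concerns assertions made off the main test goroutine: they must be preceded by defer GinkgoRecover(), otherwise the failure panic escapes Ginkgo and crashes the process, as happened for the DNS probe failure here. A minimal sketch of that pattern, assuming the Ginkgo v1 and Gomega packages vendored by this Kubernetes branch; the spec wording is illustrative.

    package e2e_test

    import (
    	. "github.com/onsi/ginkgo"
    	. "github.com/onsi/gomega"
    )

    var _ = Describe("assertions made in a goroutine", func() {
    	It("defers GinkgoRecover so failures are captured", func() {
    		done := make(chan struct{})
    		go func() {
    			// Without this deferred call, a failing assertion in this
    			// goroutine panics past Ginkgo, as the message above warns.
    			defer GinkgoRecover()
    			defer close(done)
    			Expect(1 + 1).To(Equal(2))
    		}()
    		<-done
    	})
    })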
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a84100, 0xc00257e140)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6a84100, 0xc00257e140)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00115fce0, 0x159, 0x86a5e60, 0x7d, 0xd3, 0xc000c99000, 0x7fb)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x61dbcc0, 0x75da840)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc00115fce0, 0x159, 0xc001e1b7e8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00115fce0, 0x159, 0xc001e1b8d0, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x6f89b47, 0x24, 0xc001e1bb30, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc002a9c100, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001e1bda8, 0x29a3500, 0x0, 0x0)
... skipping 76 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:50:12.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-3857" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":16,"skipped":342,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:50:12.199: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  3 20:50:12.239: INFO: Waiting up to 5m0s for pod "downward-api-7f9fa53f-da06-437c-b089-495e4bc0baa6" in namespace "downward-api-7170" to be "Succeeded or Failed"
    Sep  3 20:50:12.243: INFO: Pod "downward-api-7f9fa53f-da06-437c-b089-495e4bc0baa6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.744741ms
    Sep  3 20:50:14.248: INFO: Pod "downward-api-7f9fa53f-da06-437c-b089-495e4bc0baa6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008404855s
    STEP: Saw pod success
    Sep  3 20:50:14.248: INFO: Pod "downward-api-7f9fa53f-da06-437c-b089-495e4bc0baa6" satisfied condition "Succeeded or Failed"
    Sep  3 20:50:14.251: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod downward-api-7f9fa53f-da06-437c-b089-495e4bc0baa6 container dapi-container: <nil>
    STEP: delete the pod
    Sep  3 20:50:14.281: INFO: Waiting for pod downward-api-7f9fa53f-da06-437c-b089-495e4bc0baa6 to disappear
    Sep  3 20:50:14.283: INFO: Pod downward-api-7f9fa53f-da06-437c-b089-495e4bc0baa6 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:50:14.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7170" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":346,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 150 lines ...
    Sep  3 20:49:58.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2855 exec execpod-affinitybxhzz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.7 31096'
    Sep  3 20:50:00.564: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.7 31096\nConnection to 172.18.0.7 31096 port [tcp/*] succeeded!\n"
    Sep  3 20:50:00.564: INFO: stdout: ""
    Sep  3 20:50:00.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2855 exec execpod-affinitybxhzz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.7 31096'
    Sep  3 20:50:02.751: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.7 31096\nConnection to 172.18.0.7 31096 port [tcp/*] succeeded!\n"
    Sep  3 20:50:02.751: INFO: stdout: ""
    Sep  3 20:50:02.752: FAIL: Unexpected error:
        <*errors.errorString | 0xc002ba2290>: {
            s: "service is not reachable within 2m0s timeout on endpoint 172.18.0.7:31096 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint 172.18.0.7:31096 over TCP protocol
    occurred
    
... skipping 25 lines ...
    • Failure [146.074 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  3 20:50:02.752: Unexpected error:
          <*errors.errorString | 0xc002ba2290>: {
              s: "service is not reachable within 2m0s timeout on endpoint 172.18.0.7:31096 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint 172.18.0.7:31096 over TCP protocol
      occurred
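The failure above is a TCP reachability check timing out: the test keeps probing NodePort 172.18.0.7:31096 with nc until a 2m0s budget expires. A rough Go equivalent of that polling loop, assuming the k8s.io/apimachinery wait helpers that the stack traces in this log already reference; the 5s interval and the helper name are my own, not the e2e framework's.

    package main

    import (
    	"fmt"
    	"net"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForTCP polls an endpoint until it accepts a TCP connection or the
    // timeout expires.
    func waitForTCP(endpoint string, timeout time.Duration) error {
    	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
    		conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
    		if err != nil {
    			return false, nil // not reachable yet; keep polling
    		}
    		conn.Close()
    		return true, nil
    	})
    }

    func main() {
    	// Endpoint taken from the failure above.
    	if err := waitForTCP("172.18.0.7:31096", 2*time.Minute); err != nil {
    		fmt.Println("service is not reachable within the timeout:", err)
    	}
    }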
    
... skipping 6 lines ...
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-08116222-9a5a-4233-9f5b-4855d295590f
    STEP: Creating a pod to test consume secrets
    Sep  3 20:50:14.340: INFO: Waiting up to 5m0s for pod "pod-secrets-10dbce37-6031-4a98-ae41-5047785ad55e" in namespace "secrets-2566" to be "Succeeded or Failed"
    Sep  3 20:50:14.344: INFO: Pod "pod-secrets-10dbce37-6031-4a98-ae41-5047785ad55e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.473006ms
    Sep  3 20:50:16.348: INFO: Pod "pod-secrets-10dbce37-6031-4a98-ae41-5047785ad55e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007806876s
    STEP: Saw pod success
    Sep  3 20:50:16.348: INFO: Pod "pod-secrets-10dbce37-6031-4a98-ae41-5047785ad55e" satisfied condition "Succeeded or Failed"
    Sep  3 20:50:16.351: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod pod-secrets-10dbce37-6031-4a98-ae41-5047785ad55e container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  3 20:50:16.365: INFO: Waiting for pod pod-secrets-10dbce37-6031-4a98-ae41-5047785ad55e to disappear
    Sep  3 20:50:16.368: INFO: Pod pod-secrets-10dbce37-6031-4a98-ae41-5047785ad55e no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:50:16.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-2566" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":354,"failed":0}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:50:26.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-9339" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":19,"skipped":369,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "webhook-5075-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":20,"skipped":390,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:50:45.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3398" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":21,"skipped":393,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:50:45.203: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename job
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a job
    STEP: Ensuring job reaches completions
    [AfterEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:50:53.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-7445" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":22,"skipped":396,"failed":0}

    
    SSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":24,"skipped":524,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:50:15.296: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 62 lines ...
    STEP: Destroying namespace "services-2176" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":25,"skipped":524,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 20:50:55.357: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9029add-372e-430d-9439-760ae4faad88" in namespace "downward-api-6741" to be "Succeeded or Failed"
    Sep  3 20:50:55.365: INFO: Pod "downwardapi-volume-b9029add-372e-430d-9439-760ae4faad88": Phase="Pending", Reason="", readiness=false. Elapsed: 7.592126ms
    Sep  3 20:50:57.369: INFO: Pod "downwardapi-volume-b9029add-372e-430d-9439-760ae4faad88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012124073s
    STEP: Saw pod success
    Sep  3 20:50:57.369: INFO: Pod "downwardapi-volume-b9029add-372e-430d-9439-760ae4faad88" satisfied condition "Succeeded or Failed"
    Sep  3 20:50:57.373: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod downwardapi-volume-b9029add-372e-430d-9439-760ae4faad88 container client-container: <nil>
    STEP: delete the pod
    Sep  3 20:50:57.389: INFO: Waiting for pod downwardapi-volume-b9029add-372e-430d-9439-760ae4faad88 to disappear
    Sep  3 20:50:57.392: INFO: Pod downwardapi-volume-b9029add-372e-430d-9439-760ae4faad88 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:50:57.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6741" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":525,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:50:59.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-5667" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":532,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-9460-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":23,"skipped":404,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-1387-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":24,"skipped":406,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:51:06.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-1180" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":25,"skipped":410,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:51:27.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-8862" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":28,"skipped":538,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:51:32.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-5587" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":29,"skipped":559,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:51:35.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-8280" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":560,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 59 lines ...
    STEP: Destroying namespace "services-427" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":26,"skipped":416,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:51:50.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-2973" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":27,"skipped":420,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:51:50.380: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-9183ac69-fdf9-4915-a754-2c8e7f1329c9
    STEP: Creating a pod to test consume configMaps
    Sep  3 20:51:50.424: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b2249972-1438-4b43-9c68-63836342ec08" in namespace "projected-477" to be "Succeeded or Failed"
    Sep  3 20:51:50.429: INFO: Pod "pod-projected-configmaps-b2249972-1438-4b43-9c68-63836342ec08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.480467ms
    Sep  3 20:51:52.434: INFO: Pod "pod-projected-configmaps-b2249972-1438-4b43-9c68-63836342ec08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009131159s
    STEP: Saw pod success
    Sep  3 20:51:52.434: INFO: Pod "pod-projected-configmaps-b2249972-1438-4b43-9c68-63836342ec08" satisfied condition "Succeeded or Failed"
    Sep  3 20:51:52.437: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-projected-configmaps-b2249972-1438-4b43-9c68-63836342ec08 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 20:51:52.459: INFO: Waiting for pod pod-projected-configmaps-b2249972-1438-4b43-9c68-63836342ec08 to disappear
    Sep  3 20:51:52.462: INFO: Pod pod-projected-configmaps-b2249972-1438-4b43-9c68-63836342ec08 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:51:52.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-477" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":447,"failed":0}

    
    SSSSSSSSSSS
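    Note: the `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above come from the e2e framework polling the pod's phase until it reaches a terminal state. A minimal client-go sketch of that polling pattern, assuming a reachable kubeconfig and illustrative pod/namespace names (this is not the framework's own helper):

        package main

        import (
            "context"
            "fmt"
            "os"
            "time"

            v1 "k8s.io/api/core/v1"
            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/apimachinery/pkg/util/wait"
            "k8s.io/client-go/kubernetes"
            "k8s.io/client-go/tools/clientcmd"
        )

        // waitPodTerminal polls the pod phase every 2s for up to 5m,
        // mirroring the "Waiting up to 5m0s for pod ..." log lines above.
        func waitPodTerminal(cs kubernetes.Interface, ns, name string) error {
            return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
                switch pod.Status.Phase {
                case v1.PodSucceeded:
                    return true, nil // condition satisfied, stop polling
                case v1.PodFailed:
                    return false, fmt.Errorf("pod %q failed", name)
                default:
                    return false, nil // still Pending/Running, keep polling
                }
            })
        }

        func main() {
            cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
            if err != nil {
                panic(err)
            }
            if err := waitPodTerminal(kubernetes.NewForConfigOrDie(cfg), "default", "example-pod"); err != nil {
                panic(err)
            }
        }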
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 20:51:52.536: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e285df23-d7e5-4ca7-a3af-afa89a7735a6" in namespace "downward-api-7743" to be "Succeeded or Failed"
    Sep  3 20:51:52.540: INFO: Pod "downwardapi-volume-e285df23-d7e5-4ca7-a3af-afa89a7735a6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.141849ms
    Sep  3 20:51:54.544: INFO: Pod "downwardapi-volume-e285df23-d7e5-4ca7-a3af-afa89a7735a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007157374s
    STEP: Saw pod success
    Sep  3 20:51:54.544: INFO: Pod "downwardapi-volume-e285df23-d7e5-4ca7-a3af-afa89a7735a6" satisfied condition "Succeeded or Failed"
    Sep  3 20:51:54.546: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod downwardapi-volume-e285df23-d7e5-4ca7-a3af-afa89a7735a6 container client-container: <nil>
    STEP: delete the pod
    Sep  3 20:51:54.563: INFO: Waiting for pod downwardapi-volume-e285df23-d7e5-4ca7-a3af-afa89a7735a6 to disappear
    Sep  3 20:51:54.566: INFO: Pod downwardapi-volume-e285df23-d7e5-4ca7-a3af-afa89a7735a6 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:51:54.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7743" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":458,"failed":0}

    
    SSSSSSSSS
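    Note: the Downward API volume test above mounts a volume whose files are rendered from the pod's own resource fields. A rough sketch of the kind of pod object it creates; the names, image, command, and request value here are illustrative, not the framework's exact manifest:

        package sketch

        import (
            v1 "k8s.io/api/core/v1"
            "k8s.io/apimachinery/pkg/api/resource"
            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        )

        // downwardAPIPod exposes the container's CPU request as the file
        // /etc/podinfo/cpu_request, which the test then reads back from the pod's logs.
        func downwardAPIPod(name string) *v1.Pod {
            return &v1.Pod{
                ObjectMeta: metav1.ObjectMeta{Name: name},
                Spec: v1.PodSpec{
                    RestartPolicy: v1.RestartPolicyNever,
                    Containers: []v1.Container{{
                        Name:    "client-container",
                        Image:   "busybox",
                        Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
                        Resources: v1.ResourceRequirements{
                            Requests: v1.ResourceList{v1.ResourceCPU: resource.MustParse("250m")},
                        },
                        VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                    }},
                    Volumes: []v1.Volume{{
                        Name: "podinfo",
                        VolumeSource: v1.VolumeSource{
                            DownwardAPI: &v1.DownwardAPIVolumeSource{
                                Items: []v1.DownwardAPIVolumeFile{{
                                    Path: "cpu_request",
                                    ResourceFieldRef: &v1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "requests.cpu",
                                    },
                                }},
                            },
                        },
                    }},
                },
            }
        }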
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:52:00.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-2306" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":31,"skipped":589,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:51:54.597: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  3 20:51:56.646: INFO: Deleting pod "var-expansion-6f9b9919-425f-408a-925b-fece07a891e2" in namespace "var-expansion-7999"
    Sep  3 20:51:56.653: INFO: Wait up to 5m0s for pod "var-expansion-6f9b9919-425f-408a-925b-fece07a891e2" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:52:06.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-7999" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":30,"skipped":467,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.616 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":537,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-mgw6
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  3 20:52:44.329: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mgw6" in namespace "subpath-6360" to be "Succeeded or Failed"
    Sep  3 20:52:44.334: INFO: Pod "pod-subpath-test-configmap-mgw6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.822974ms
    Sep  3 20:52:46.338: INFO: Pod "pod-subpath-test-configmap-mgw6": Phase="Running", Reason="", readiness=true. Elapsed: 2.008694745s
    Sep  3 20:52:48.342: INFO: Pod "pod-subpath-test-configmap-mgw6": Phase="Running", Reason="", readiness=true. Elapsed: 4.012277073s
    Sep  3 20:52:50.346: INFO: Pod "pod-subpath-test-configmap-mgw6": Phase="Running", Reason="", readiness=true. Elapsed: 6.016256603s
    Sep  3 20:52:52.350: INFO: Pod "pod-subpath-test-configmap-mgw6": Phase="Running", Reason="", readiness=true. Elapsed: 8.020784305s
    Sep  3 20:52:54.356: INFO: Pod "pod-subpath-test-configmap-mgw6": Phase="Running", Reason="", readiness=true. Elapsed: 10.026278865s
    Sep  3 20:52:56.361: INFO: Pod "pod-subpath-test-configmap-mgw6": Phase="Running", Reason="", readiness=true. Elapsed: 12.031366979s
    Sep  3 20:52:58.365: INFO: Pod "pod-subpath-test-configmap-mgw6": Phase="Running", Reason="", readiness=true. Elapsed: 14.035970609s
    Sep  3 20:53:00.371: INFO: Pod "pod-subpath-test-configmap-mgw6": Phase="Running", Reason="", readiness=true. Elapsed: 16.041491452s
    Sep  3 20:53:02.376: INFO: Pod "pod-subpath-test-configmap-mgw6": Phase="Running", Reason="", readiness=true. Elapsed: 18.046636046s
    Sep  3 20:53:04.382: INFO: Pod "pod-subpath-test-configmap-mgw6": Phase="Running", Reason="", readiness=true. Elapsed: 20.052679753s
    Sep  3 20:53:06.389: INFO: Pod "pod-subpath-test-configmap-mgw6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.059283589s
    STEP: Saw pod success
    Sep  3 20:53:06.389: INFO: Pod "pod-subpath-test-configmap-mgw6" satisfied condition "Succeeded or Failed"
    Sep  3 20:53:06.392: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-subpath-test-configmap-mgw6 container test-container-subpath-configmap-mgw6: <nil>
    STEP: delete the pod
    Sep  3 20:53:06.410: INFO: Waiting for pod pod-subpath-test-configmap-mgw6 to disappear
    Sep  3 20:53:06.413: INFO: Pod pod-subpath-test-configmap-mgw6 no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-mgw6
    Sep  3 20:53:06.413: INFO: Deleting pod "pod-subpath-test-configmap-mgw6" in namespace "subpath-6360"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:53:06.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-6360" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":37,"skipped":553,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:53:06.437: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep  3 20:53:06.482: INFO: Waiting up to 5m0s for pod "pod-b045d815-60d4-4f3c-8279-aa92827ab990" in namespace "emptydir-6894" to be "Succeeded or Failed"
    Sep  3 20:53:06.489: INFO: Pod "pod-b045d815-60d4-4f3c-8279-aa92827ab990": Phase="Pending", Reason="", readiness=false. Elapsed: 6.91333ms
    Sep  3 20:53:08.494: INFO: Pod "pod-b045d815-60d4-4f3c-8279-aa92827ab990": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01174019s
    STEP: Saw pod success
    Sep  3 20:53:08.494: INFO: Pod "pod-b045d815-60d4-4f3c-8279-aa92827ab990" satisfied condition "Succeeded or Failed"
    Sep  3 20:53:08.498: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-b045d815-60d4-4f3c-8279-aa92827ab990 container test-container: <nil>
    STEP: delete the pod
    Sep  3 20:53:08.511: INFO: Waiting for pod pod-b045d815-60d4-4f3c-8279-aa92827ab990 to disappear
    Sep  3 20:53:08.515: INFO: Pod pod-b045d815-60d4-4f3c-8279-aa92827ab990 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:53:08.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-6894" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":558,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:53:08.595: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep  3 20:53:08.641: INFO: Waiting up to 5m0s for pod "pod-64362481-9468-4b9a-8001-809f531345f6" in namespace "emptydir-9981" to be "Succeeded or Failed"
    Sep  3 20:53:08.647: INFO: Pod "pod-64362481-9468-4b9a-8001-809f531345f6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.84972ms
    Sep  3 20:53:10.651: INFO: Pod "pod-64362481-9468-4b9a-8001-809f531345f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009741309s
    STEP: Saw pod success
    Sep  3 20:53:10.651: INFO: Pod "pod-64362481-9468-4b9a-8001-809f531345f6" satisfied condition "Succeeded or Failed"
    Sep  3 20:53:10.653: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-64362481-9468-4b9a-8001-809f531345f6 container test-container: <nil>
    STEP: delete the pod
    Sep  3 20:53:10.668: INFO: Waiting for pod pod-64362481-9468-4b9a-8001-809f531345f6 to disappear
    Sep  3 20:53:10.670: INFO: Pod pod-64362481-9468-4b9a-8001-809f531345f6 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:53:10.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-9981" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":603,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 20:53:10.729: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5cbbe7a2-c5e6-4ea2-ae34-24e9f553eefa" in namespace "downward-api-3162" to be "Succeeded or Failed"
    Sep  3 20:53:10.732: INFO: Pod "downwardapi-volume-5cbbe7a2-c5e6-4ea2-ae34-24e9f553eefa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.646926ms
    Sep  3 20:53:12.737: INFO: Pod "downwardapi-volume-5cbbe7a2-c5e6-4ea2-ae34-24e9f553eefa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007675492s
    STEP: Saw pod success
    Sep  3 20:53:12.737: INFO: Pod "downwardapi-volume-5cbbe7a2-c5e6-4ea2-ae34-24e9f553eefa" satisfied condition "Succeeded or Failed"
    Sep  3 20:53:12.741: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod downwardapi-volume-5cbbe7a2-c5e6-4ea2-ae34-24e9f553eefa container client-container: <nil>
    STEP: delete the pod
    Sep  3 20:53:12.757: INFO: Waiting for pod downwardapi-volume-5cbbe7a2-c5e6-4ea2-ae34-24e9f553eefa to disappear
    Sep  3 20:53:12.760: INFO: Pod downwardapi-volume-5cbbe7a2-c5e6-4ea2-ae34-24e9f553eefa no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:53:12.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-3162" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":605,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 20:53:12.819: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0e18624-3694-44b7-ba18-f3264b0173d5" in namespace "downward-api-8019" to be "Succeeded or Failed"
    Sep  3 20:53:12.825: INFO: Pod "downwardapi-volume-e0e18624-3694-44b7-ba18-f3264b0173d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.479997ms
    Sep  3 20:53:14.830: INFO: Pod "downwardapi-volume-e0e18624-3694-44b7-ba18-f3264b0173d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010970659s
    STEP: Saw pod success
    Sep  3 20:53:14.830: INFO: Pod "downwardapi-volume-e0e18624-3694-44b7-ba18-f3264b0173d5" satisfied condition "Succeeded or Failed"
    Sep  3 20:53:14.833: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod downwardapi-volume-e0e18624-3694-44b7-ba18-f3264b0173d5 container client-container: <nil>
    STEP: delete the pod
    Sep  3 20:53:14.849: INFO: Waiting for pod downwardapi-volume-e0e18624-3694-44b7-ba18-f3264b0173d5 to disappear
    Sep  3 20:53:14.853: INFO: Pod downwardapi-volume-e0e18624-3694-44b7-ba18-f3264b0173d5 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:53:14.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-8019" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":608,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:53:16.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-6482" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":618,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:53:21.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-7978" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":43,"skipped":620,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 51 lines ...
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  3 20:53:33.231: INFO: File wheezy_udp@dns-test-service-3.dns-1633.svc.cluster.local from pod  dns-1633/dns-test-6fa43d5d-a9e1-4357-83b4-34932ef4b27a contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep  3 20:53:33.235: INFO: File jessie_udp@dns-test-service-3.dns-1633.svc.cluster.local from pod  dns-1633/dns-test-6fa43d5d-a9e1-4357-83b4-34932ef4b27a contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep  3 20:53:33.235: INFO: Lookups using dns-1633/dns-test-6fa43d5d-a9e1-4357-83b4-34932ef4b27a failed for: [wheezy_udp@dns-test-service-3.dns-1633.svc.cluster.local jessie_udp@dns-test-service-3.dns-1633.svc.cluster.local]

    
    Sep  3 20:53:38.240: INFO: File wheezy_udp@dns-test-service-3.dns-1633.svc.cluster.local from pod  dns-1633/dns-test-6fa43d5d-a9e1-4357-83b4-34932ef4b27a contains '' instead of 'bar.example.com.'
    Sep  3 20:53:38.244: INFO: File jessie_udp@dns-test-service-3.dns-1633.svc.cluster.local from pod  dns-1633/dns-test-6fa43d5d-a9e1-4357-83b4-34932ef4b27a contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep  3 20:53:38.244: INFO: Lookups using dns-1633/dns-test-6fa43d5d-a9e1-4357-83b4-34932ef4b27a failed for: [wheezy_udp@dns-test-service-3.dns-1633.svc.cluster.local jessie_udp@dns-test-service-3.dns-1633.svc.cluster.local]

    
    Sep  3 20:53:43.240: INFO: File wheezy_udp@dns-test-service-3.dns-1633.svc.cluster.local from pod  dns-1633/dns-test-6fa43d5d-a9e1-4357-83b4-34932ef4b27a contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep  3 20:53:43.244: INFO: File jessie_udp@dns-test-service-3.dns-1633.svc.cluster.local from pod  dns-1633/dns-test-6fa43d5d-a9e1-4357-83b4-34932ef4b27a contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep  3 20:53:43.244: INFO: Lookups using dns-1633/dns-test-6fa43d5d-a9e1-4357-83b4-34932ef4b27a failed for: [wheezy_udp@dns-test-service-3.dns-1633.svc.cluster.local jessie_udp@dns-test-service-3.dns-1633.svc.cluster.local]

    
    Sep  3 20:53:48.239: INFO: File wheezy_udp@dns-test-service-3.dns-1633.svc.cluster.local from pod  dns-1633/dns-test-6fa43d5d-a9e1-4357-83b4-34932ef4b27a contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep  3 20:53:48.244: INFO: File jessie_udp@dns-test-service-3.dns-1633.svc.cluster.local from pod  dns-1633/dns-test-6fa43d5d-a9e1-4357-83b4-34932ef4b27a contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep  3 20:53:48.244: INFO: Lookups using dns-1633/dns-test-6fa43d5d-a9e1-4357-83b4-34932ef4b27a failed for: [wheezy_udp@dns-test-service-3.dns-1633.svc.cluster.local jessie_udp@dns-test-service-3.dns-1633.svc.cluster.local]

    
    Sep  3 20:53:53.243: INFO: File jessie_udp@dns-test-service-3.dns-1633.svc.cluster.local from pod  dns-1633/dns-test-6fa43d5d-a9e1-4357-83b4-34932ef4b27a contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep  3 20:53:53.243: INFO: Lookups using dns-1633/dns-test-6fa43d5d-a9e1-4357-83b4-34932ef4b27a failed for: [jessie_udp@dns-test-service-3.dns-1633.svc.cluster.local]

    
    Sep  3 20:53:58.246: INFO: DNS probes using dns-test-6fa43d5d-a9e1-4357-83b4-34932ef4b27a succeeded
    
    STEP: deleting the pod
    STEP: changing the service to type=ClusterIP
    STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1633.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1633.svc.cluster.local; sleep 1; done
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:54:00.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-1633" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":44,"skipped":662,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
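    Note: the ExternalName DNS test above creates a Service of type ExternalName (answered by cluster DNS as a CNAME, e.g. foo.example.com), keeps resolving it from the wheezy/jessie probe pods with dig until the expected target appears, then updates the target and finally converts the Service to type ClusterIP so an A record is returned instead. A hedged sketch of the Service objects involved, with illustrative names and hostnames:

        package sketch

        import (
            v1 "k8s.io/api/core/v1"
            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        )

        // externalNameService is resolved by cluster DNS as a CNAME to spec.externalName.
        func externalNameService(name, ns string) *v1.Service {
            return &v1.Service{
                ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
                Spec: v1.ServiceSpec{
                    Type:         v1.ServiceTypeExternalName,
                    ExternalName: "foo.example.com", // probe pods expect this CNAME target
                },
            }
        }

        // toClusterIP mutates the same Service so DNS starts answering with an A record
        // instead of a CNAME, matching the "changing the service to type=ClusterIP" step above.
        func toClusterIP(svc *v1.Service) {
            svc.Spec.Type = v1.ServiceTypeClusterIP
            svc.Spec.ExternalName = ""
            svc.Spec.Ports = []v1.ServicePort{{Name: "http", Port: 80}}
        }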
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:54:16.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-1683" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":45,"skipped":704,"failed":0}

    
    S
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":32,"skipped":609,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:54:00.233: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 39 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:54:26.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-2084" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":609,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:54:27.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-9523" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":46,"skipped":705,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Servers with support for Table transformation
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:54:27.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "tables-9817" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":47,"skipped":726,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:54:28.027: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's args [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's args
    Sep  3 20:54:28.067: INFO: Waiting up to 5m0s for pod "var-expansion-909711a8-b37c-4ff7-bd8e-cbddb86825af" in namespace "var-expansion-3898" to be "Succeeded or Failed"
    Sep  3 20:54:28.069: INFO: Pod "var-expansion-909711a8-b37c-4ff7-bd8e-cbddb86825af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.703871ms
    Sep  3 20:54:30.074: INFO: Pod "var-expansion-909711a8-b37c-4ff7-bd8e-cbddb86825af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006925574s
    STEP: Saw pod success
    Sep  3 20:54:30.074: INFO: Pod "var-expansion-909711a8-b37c-4ff7-bd8e-cbddb86825af" satisfied condition "Succeeded or Failed"
    Sep  3 20:54:30.076: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod var-expansion-909711a8-b37c-4ff7-bd8e-cbddb86825af container dapi-container: <nil>
    STEP: delete the pod
    Sep  3 20:54:30.099: INFO: Waiting for pod var-expansion-909711a8-b37c-4ff7-bd8e-cbddb86825af to disappear
    Sep  3 20:54:30.102: INFO: Pod var-expansion-909711a8-b37c-4ff7-bd8e-cbddb86825af no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:54:30.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-3898" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":769,"failed":0}

    
    SSSSSSSSSSSS
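    Note: the Variable Expansion test above verifies that $(VAR) references in a container's args are expanded from that container's environment before the command runs. A minimal sketch of such a container spec, with illustrative image and values:

        package sketch

        import v1 "k8s.io/api/core/v1"

        // argsExpansionContainer shows $(TEST_VAR) in args being substituted from env,
        // which is what the "substituting values in a container's args" test asserts.
        func argsExpansionContainer() v1.Container {
            return v1.Container{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c"},
                Args:    []string{"echo $(TEST_VAR)"}, // the kubelet expands $(TEST_VAR) before exec
                Env: []v1.EnvVar{{
                    Name:  "TEST_VAR",
                    Value: "test-value",
                }},
            }
        }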
    ------------------------------
    [BeforeEach] [sig-network] HostPort
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:54:42.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "hostport-2656" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":34,"skipped":630,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:54:47.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-4823" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":49,"skipped":781,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:54:49.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-9347" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":35,"skipped":631,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    Sep  3 20:54:51.023: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
    Sep  3 20:54:51.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-382 describe pod agnhost-primary-9jkjf'
    Sep  3 20:54:51.134: INFO: stderr: ""
    Sep  3 20:54:51.134: INFO: stdout: "Name:         agnhost-primary-9jkjf\nNamespace:    kubectl-382\nPriority:     0\nNode:         k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm/172.18.0.4\nStart Time:   Sat, 03 Sep 2022 20:54:48 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           192.168.0.41\nIPs:\n  IP:           192.168.0.41\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://5c82c9c11b14ea62b29b621062f5b3eb0dfbebebcbb7d9232feea97a442e1305\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 03 Sep 2022 20:54:49 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjwqx (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-vjwqx:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  2s    default-scheduler  Successfully assigned kubectl-382/agnhost-primary-9jkjf to k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm\n  Normal  Pulled     2s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    2s    kubelet            Created container agnhost-primary\n  Normal  Started    2s    kubelet            Started container agnhost-primary\n"
    Sep  3 20:54:51.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-382 describe rc agnhost-primary'
    Sep  3 20:54:51.246: INFO: stderr: ""
    Sep  3 20:54:51.246: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-382\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: agnhost-primary-9jkjf\n"
    Sep  3 20:54:51.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-382 describe service agnhost-primary'
    Sep  3 20:54:51.350: INFO: stderr: ""
    Sep  3 20:54:51.350: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-382\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                10.134.70.63\nIPs:               10.134.70.63\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.0.41:6379\nSession Affinity:  None\nEvents:            <none>\n"
    Sep  3 20:54:51.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-382 describe node k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr'
    Sep  3 20:54:51.478: INFO: stderr: ""
    Sep  3 20:54:51.478: INFO: stdout: "Name:               k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr\nRoles:              control-plane,master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/control-plane=\n                    node-role.kubernetes.io/master=\n                    node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations:        cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-uljqkb\n                    cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-ie185u\n                    cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr\n                    cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n                    cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-uljqkb-4xpw7\n                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 03 Sep 2022 20:38:37 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr\n  AcquireTime:     <unset>\n  RenewTime:       Sat, 03 Sep 2022 20:54:48 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sat, 03 Sep 2022 20:54:41 +0000   Sat, 03 Sep 2022 20:38:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sat, 03 Sep 2022 20:54:41 +0000   Sat, 03 Sep 2022 20:38:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sat, 03 Sep 2022 20:54:41 +0000   Sat, 03 Sep 2022 20:38:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sat, 03 Sep 2022 20:54:41 +0000   Sat, 03 Sep 2022 20:39:39 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.9\n  Hostname:    k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860680Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860680Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 cb8616ecd2c641b8a1a8c10eda3ff006\n  System UUID:                b3ed6acf-ec84-4cd4-9ec7-b0a9250cd150\n  Boot ID:                    33e6c6d0-bfc1-4131-ba55-5f91de48743e\n  Kernel Version:             5.4.0-1072-gke\n  OS Image:                   Ubuntu 22.04.1 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.6.7\n  Kubelet Version:            v1.21.14\n  Kube-Proxy Version:         v1.21.14\nPodCIDR:                      192.168.5.0/24\nPodCIDRs:       
              192.168.5.0/24\nProviderID:                   docker:////k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr\nNon-terminated Pods:          (6 in total)\n  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m\n  kube-system                 kindnet-dhlsw                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m\n  kube-system                 kube-apiserver-k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m\n  kube-system                 kube-controller-manager-k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr    200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m\n  kube-system                 kube-proxy-mm527                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m\n  kube-system                 kube-scheduler-k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                750m (9%)   100m (1%)\n  memory             150Mi (0%)  50Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:\n  Type     Reason                    Age                From        Message\n  ----     ------                    ----               ----        -------\n  Normal   Starting                  16m                kubelet     Starting kubelet.\n  Warning  InvalidDiskCapacity       16m                kubelet     invalid capacity 0 on image filesystem\n  Normal   NodeHasSufficientMemory   16m (x2 over 16m)  kubelet     Node k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     16m (x2 over 16m)  kubelet     Node k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      16m (x2 over 16m)  kubelet     Node k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced   16m                kubelet     Updated Node Allocatable limit across pods\n  Warning  CheckLimitsForResolvConf  16m                kubelet     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   Starting                  15m                kube-proxy  Starting kube-proxy.\n  Normal   NodeReady                 15m (x2 over 15m)  kubelet     Node k8s-upgrade-and-conformance-uljqkb-4xpw7-jp2dr status is now: NodeReady\n  Normal   Starting                  13m                kube-proxy  Starting kube-proxy.\n"
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:54:51.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-382" for this suite.
    
    •
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":4,"skipped":76,"failed":1,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:50:09.955: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  3 20:53:44.896: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4661/dns-test-5e63d259-a82d-4bc4-befb-990d9e473cfe: the server is currently unable to handle the request (get pods dns-test-5e63d259-a82d-4bc4-befb-990d9e473cfe)
    Sep  3 20:55:12.025: FAIL: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4661/dns-test-5e63d259-a82d-4bc4-befb-990d9e473cfe: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-4661/pods/dns-test-5e63d259-a82d-4bc4-befb-990d9e473cfe/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded

    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001e1bda8, 0x29a3500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc002a9c480, 0xc001e1bda8, 0xc002a9c480, 0xc001e1bda8)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
    testing.tRunner(0xc00112ad80, 0x70fea78)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0903 20:55:12.026851      15 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  3 20:55:12.026: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4661/dns-test-5e63d259-a82d-4bc4-befb-990d9e473cfe: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-4661/pods/dns-test-5e63d259-a82d-4bc4-befb-990d9e473cfe/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001e1bda8, 0x29a3500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc002a9c480, 0xc001e1bda8, 0xc002a9c480, 0xc001e1bda8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc001e1bda8, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc001491380, 0x8, 0x8, 0x6ee63d3, 0x7, 0xc003cb0800, 0x77b8c18, 0xc00266eb00, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0006db080, 0xc003cb0800, 0xc001491380, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.1()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x58a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc00112ad80)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc00112ad80)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc00112ad80, 0x70fea78)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a84100, 0xc0030441c0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6a84100, 0xc0030441c0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00115fce0, 0x159, 0x86a5e60, 0x7d, 0xd3, 0xc0010d7800, 0x7fb)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x61dbcc0, 0x75da840)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc00115fce0, 0x159, 0xc001e1b7e8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00115fce0, 0x159, 0xc001e1b8d0, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x6f89b47, 0x24, 0xc001e1bb30, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc002a9c400, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001e1bda8, 0x29a3500, 0x0, 0x0)
... skipping 54 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  3 20:55:12.026: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4661/dns-test-5e63d259-a82d-4bc4-befb-990d9e473cfe: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-4661/pods/dns-test-5e63d259-a82d-4bc4-befb-990d9e473cfe/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211
    ------------------------------
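    The "Observed a panic" block above is the apimachinery wait helper's crash protection (runtime.HandleCrash, visible in the trace) logging Ginkgo's FailurePanic before Ginkgo rescues it; the embedded message is Ginkgo's standard guidance for assertions made off the test goroutine, where the usual fix is to defer GinkgoRecover() at the top of that goroutine. The sketch below is illustrative only, not code from the conformance suite in this log; it assumes Ginkgo v1 with Gomega, and the spec and test names are invented.

    package sketch_test

    import (
        "testing"
        "time"

        . "github.com/onsi/ginkgo"
        . "github.com/onsi/gomega"
    )

    // Hypothetical spec showing the GinkgoRecover pattern referenced by the
    // panic message above; it is not part of the suite being run in this log.
    var _ = Describe("assertions made in goroutines", func() {
        It("are rescued when the goroutine defers GinkgoRecover", func() {
            done := make(chan struct{})
            go func() {
                // Without this deferred call, a failing Expect in this goroutine
                // would panic through the Go runtime instead of being rescued
                // by Ginkgo, producing a trace like the one above.
                defer GinkgoRecover()
                defer close(done)
                Expect(1 + 1).To(Equal(2))
            }()
            Eventually(done, 5*time.Second).Should(BeClosed())
        })
    })

    // Stand-in entry point so the sketch compiles on its own.
    func TestGinkgoRecoverSketch(t *testing.T) {
        RegisterFailHandler(Fail)
        RunSpecs(t, "GinkgoRecover sketch")
    }
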
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":50,"skipped":782,"failed":0}

    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:54:51.585: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename statefulset
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 98 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:56:23.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-4508" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":51,"skipped":782,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:56:25.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-3974" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":787,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:56:31.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-2535" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":53,"skipped":810,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:56:33.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-5437" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":846,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Lease
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:56:33.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "lease-test-1068" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":55,"skipped":856,"failed":0}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:56:33.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-373" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":873,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-9568-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":57,"skipped":898,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:56:38.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-9005" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":922,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:56:39.011: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep  3 20:56:39.083: INFO: Waiting up to 5m0s for pod "pod-9f761fd9-abab-4454-87fc-e4c0ed88f168" in namespace "emptydir-8734" to be "Succeeded or Failed"
    Sep  3 20:56:39.090: INFO: Pod "pod-9f761fd9-abab-4454-87fc-e4c0ed88f168": Phase="Pending", Reason="", readiness=false. Elapsed: 6.793587ms
    Sep  3 20:56:41.094: INFO: Pod "pod-9f761fd9-abab-4454-87fc-e4c0ed88f168": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010658668s
    STEP: Saw pod success
    Sep  3 20:56:41.094: INFO: Pod "pod-9f761fd9-abab-4454-87fc-e4c0ed88f168" satisfied condition "Succeeded or Failed"
    Sep  3 20:56:41.097: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-9f761fd9-abab-4454-87fc-e4c0ed88f168 container test-container: <nil>
    STEP: delete the pod
    Sep  3 20:56:41.119: INFO: Waiting for pod pod-9f761fd9-abab-4454-87fc-e4c0ed88f168 to disappear
    Sep  3 20:56:41.121: INFO: Pod pod-9f761fd9-abab-4454-87fc-e4c0ed88f168 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:56:41.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8734" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":954,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:56:45.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5090" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":1028,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:56:45.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-5968" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":61,"skipped":1033,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:56:52.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-3166" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":62,"skipped":1037,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:300.065 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule jobs when suspended [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":31,"skipped":498,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 52 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:57:22.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-251" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":32,"skipped":501,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:57:22.176: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-a68cfafa-410d-4f9d-a322-65bc314b2da1
    STEP: Creating a pod to test consume configMaps
    Sep  3 20:57:22.220: INFO: Waiting up to 5m0s for pod "pod-configmaps-6a00d1f4-4161-4a26-8fde-b0992d4d5c7b" in namespace "configmap-4150" to be "Succeeded or Failed"
    Sep  3 20:57:22.225: INFO: Pod "pod-configmaps-6a00d1f4-4161-4a26-8fde-b0992d4d5c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.953795ms
    Sep  3 20:57:24.231: INFO: Pod "pod-configmaps-6a00d1f4-4161-4a26-8fde-b0992d4d5c7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009584185s
    STEP: Saw pod success
    Sep  3 20:57:24.231: INFO: Pod "pod-configmaps-6a00d1f4-4161-4a26-8fde-b0992d4d5c7b" satisfied condition "Succeeded or Failed"
    Sep  3 20:57:24.233: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-wkqbk pod pod-configmaps-6a00d1f4-4161-4a26-8fde-b0992d4d5c7b container configmap-volume-test: <nil>
    STEP: delete the pod
    Sep  3 20:57:24.247: INFO: Waiting for pod pod-configmaps-6a00d1f4-4161-4a26-8fde-b0992d4d5c7b to disappear
    Sep  3 20:57:24.250: INFO: Pod pod-configmaps-6a00d1f4-4161-4a26-8fde-b0992d4d5c7b no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:57:24.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4150" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":504,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:57:24.304: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep  3 20:57:24.344: INFO: Waiting up to 5m0s for pod "pod-144586c8-dea6-4e0b-a714-cef46dabaa6c" in namespace "emptydir-5753" to be "Succeeded or Failed"
    Sep  3 20:57:24.347: INFO: Pod "pod-144586c8-dea6-4e0b-a714-cef46dabaa6c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.00734ms
    Sep  3 20:57:26.351: INFO: Pod "pod-144586c8-dea6-4e0b-a714-cef46dabaa6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007255204s
    STEP: Saw pod success
    Sep  3 20:57:26.351: INFO: Pod "pod-144586c8-dea6-4e0b-a714-cef46dabaa6c" satisfied condition "Succeeded or Failed"
    Sep  3 20:57:26.355: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-144586c8-dea6-4e0b-a714-cef46dabaa6c container test-container: <nil>
    STEP: delete the pod
    Sep  3 20:57:26.371: INFO: Waiting for pod pod-144586c8-dea6-4e0b-a714-cef46dabaa6c to disappear
    Sep  3 20:57:26.373: INFO: Pod pod-144586c8-dea6-4e0b-a714-cef46dabaa6c no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:57:26.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-5753" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":540,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:57:30.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-1638" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":35,"skipped":556,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
    [It] should contain environment variables for services [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  3 20:57:30.632: INFO: The status of Pod server-envvars-999472ee-7ecb-4ea9-9af2-6d1c327b8e82 is Pending, waiting for it to be Running (with Ready = true)
    Sep  3 20:57:32.636: INFO: The status of Pod server-envvars-999472ee-7ecb-4ea9-9af2-6d1c327b8e82 is Running (Ready = true)
    Sep  3 20:57:32.657: INFO: Waiting up to 5m0s for pod "client-envvars-35414ed1-47ad-4a1f-9a70-3b982b33dfc7" in namespace "pods-4451" to be "Succeeded or Failed"
    Sep  3 20:57:32.663: INFO: Pod "client-envvars-35414ed1-47ad-4a1f-9a70-3b982b33dfc7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.395219ms
    Sep  3 20:57:34.667: INFO: Pod "client-envvars-35414ed1-47ad-4a1f-9a70-3b982b33dfc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009835842s
    STEP: Saw pod success
    Sep  3 20:57:34.667: INFO: Pod "client-envvars-35414ed1-47ad-4a1f-9a70-3b982b33dfc7" satisfied condition "Succeeded or Failed"
    Sep  3 20:57:34.670: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod client-envvars-35414ed1-47ad-4a1f-9a70-3b982b33dfc7 container env3cont: <nil>
    STEP: delete the pod
    Sep  3 20:57:34.685: INFO: Waiting for pod client-envvars-35414ed1-47ad-4a1f-9a70-3b982b33dfc7 to disappear
    Sep  3 20:57:34.688: INFO: Pod client-envvars-35414ed1-47ad-4a1f-9a70-3b982b33dfc7 no longer exists
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:57:34.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-4451" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":570,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 20:57:34.770: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8590b849-562d-4e1d-94f1-d580dbf3f134" in namespace "projected-411" to be "Succeeded or Failed"
    Sep  3 20:57:34.773: INFO: Pod "downwardapi-volume-8590b849-562d-4e1d-94f1-d580dbf3f134": Phase="Pending", Reason="", readiness=false. Elapsed: 2.948514ms
    Sep  3 20:57:36.778: INFO: Pod "downwardapi-volume-8590b849-562d-4e1d-94f1-d580dbf3f134": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007433437s
    STEP: Saw pod success
    Sep  3 20:57:36.778: INFO: Pod "downwardapi-volume-8590b849-562d-4e1d-94f1-d580dbf3f134" satisfied condition "Succeeded or Failed"
    Sep  3 20:57:36.781: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod downwardapi-volume-8590b849-562d-4e1d-94f1-d580dbf3f134 container client-container: <nil>
    STEP: delete the pod
    Sep  3 20:57:36.799: INFO: Waiting for pod downwardapi-volume-8590b849-562d-4e1d-94f1-d580dbf3f134 to disappear
    Sep  3 20:57:36.802: INFO: Pod downwardapi-volume-8590b849-562d-4e1d-94f1-d580dbf3f134 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:57:36.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-411" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":594,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 20:57:36.895: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73ab73f4-e1b1-42d2-8e4c-fe01611ec7c2" in namespace "projected-1424" to be "Succeeded or Failed"
    Sep  3 20:57:36.898: INFO: Pod "downwardapi-volume-73ab73f4-e1b1-42d2-8e4c-fe01611ec7c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292931ms
    Sep  3 20:57:38.903: INFO: Pod "downwardapi-volume-73ab73f4-e1b1-42d2-8e4c-fe01611ec7c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007124463s
    STEP: Saw pod success
    Sep  3 20:57:38.903: INFO: Pod "downwardapi-volume-73ab73f4-e1b1-42d2-8e4c-fe01611ec7c2" satisfied condition "Succeeded or Failed"
    Sep  3 20:57:38.906: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod downwardapi-volume-73ab73f4-e1b1-42d2-8e4c-fe01611ec7c2 container client-container: <nil>
    STEP: delete the pod
    Sep  3 20:57:38.922: INFO: Waiting for pod downwardapi-volume-73ab73f4-e1b1-42d2-8e4c-fe01611ec7c2 to disappear
    Sep  3 20:57:38.925: INFO: Pod downwardapi-volume-73ab73f4-e1b1-42d2-8e4c-fe01611ec7c2 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:57:38.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1424" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":623,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
    Sep  3 20:57:42.490: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should honor timeout [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Setting timeout (1s) shorter than webhook latency (5s)
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
    STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is longer than webhook latency
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is empty (defaulted to 10s in v1)
    STEP: Registering slow webhook via the AdmissionRegistration API
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:57:55.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-2642" for this suite.
    STEP: Destroying namespace "webhook-2642-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":39,"skipped":655,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:57:55.746: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-17eb413c-f667-4350-817b-12e85dd5a3bf
    STEP: Creating a pod to test consume secrets
    Sep  3 20:57:55.810: INFO: Waiting up to 5m0s for pod "pod-secrets-69b924b4-7184-415f-9202-60f04610cd36" in namespace "secrets-0" to be "Succeeded or Failed"
    Sep  3 20:57:55.823: INFO: Pod "pod-secrets-69b924b4-7184-415f-9202-60f04610cd36": Phase="Pending", Reason="", readiness=false. Elapsed: 12.864722ms
    Sep  3 20:57:57.827: INFO: Pod "pod-secrets-69b924b4-7184-415f-9202-60f04610cd36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016579629s
    STEP: Saw pod success
    Sep  3 20:57:57.827: INFO: Pod "pod-secrets-69b924b4-7184-415f-9202-60f04610cd36" satisfied condition "Succeeded or Failed"
    Sep  3 20:57:57.829: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-wkqbk pod pod-secrets-69b924b4-7184-415f-9202-60f04610cd36 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  3 20:57:57.844: INFO: Waiting for pod pod-secrets-69b924b4-7184-415f-9202-60f04610cd36 to disappear
    Sep  3 20:57:57.847: INFO: Pod pod-secrets-69b924b4-7184-415f-9202-60f04610cd36 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:57:57.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-0" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":686,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:57:57.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-0" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":41,"skipped":700,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:57:57.954: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-runtime
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: create the container
    STEP: wait for the container to reach Failed
    STEP: get the container status
    STEP: the container should be terminated
    STEP: the termination message should be set
    Sep  3 20:58:00.001: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
    STEP: delete the container
    [AfterEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:58:00.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-628" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":711,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:58:02.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-267" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":63,"skipped":1040,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:58:02.149: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-5de9ed91-d098-4f63-96ac-e79e0880768c
    STEP: Creating a pod to test consume configMaps
    Sep  3 20:58:02.192: INFO: Waiting up to 5m0s for pod "pod-configmaps-045984ea-0012-403b-ae01-3743f81f436c" in namespace "configmap-9235" to be "Succeeded or Failed"
    Sep  3 20:58:02.196: INFO: Pod "pod-configmaps-045984ea-0012-403b-ae01-3743f81f436c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103189ms
    Sep  3 20:58:04.200: INFO: Pod "pod-configmaps-045984ea-0012-403b-ae01-3743f81f436c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007977055s
    STEP: Saw pod success
    Sep  3 20:58:04.200: INFO: Pod "pod-configmaps-045984ea-0012-403b-ae01-3743f81f436c" satisfied condition "Succeeded or Failed"
    Sep  3 20:58:04.203: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-configmaps-045984ea-0012-403b-ae01-3743f81f436c container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 20:58:04.214: INFO: Waiting for pod pod-configmaps-045984ea-0012-403b-ae01-3743f81f436c to disappear
    Sep  3 20:58:04.219: INFO: Pod pod-configmaps-045984ea-0012-403b-ae01-3743f81f436c no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:58:04.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9235" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1045,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:58:11.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-8892" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":65,"skipped":1051,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:58:14.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-5036" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":66,"skipped":1059,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 40 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:59:34.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-6420" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":67,"skipped":1069,"failed":0}

    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:59:34.793: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pods
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:59:34.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-623" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":68,"skipped":1069,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:59:34.925: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-82897365-96af-4e97-86b5-d86b31b6f0a6
    STEP: Creating a pod to test consume secrets
    Sep  3 20:59:34.968: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-adab6f8f-8a6d-4d2f-b939-6c5115049a05" in namespace "projected-3083" to be "Succeeded or Failed"
    Sep  3 20:59:34.973: INFO: Pod "pod-projected-secrets-adab6f8f-8a6d-4d2f-b939-6c5115049a05": Phase="Pending", Reason="", readiness=false. Elapsed: 3.926297ms
    Sep  3 20:59:36.977: INFO: Pod "pod-projected-secrets-adab6f8f-8a6d-4d2f-b939-6c5115049a05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008490007s
    STEP: Saw pod success
    Sep  3 20:59:36.977: INFO: Pod "pod-projected-secrets-adab6f8f-8a6d-4d2f-b939-6c5115049a05" satisfied condition "Succeeded or Failed"
    Sep  3 20:59:36.981: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-projected-secrets-adab6f8f-8a6d-4d2f-b939-6c5115049a05 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  3 20:59:37.004: INFO: Waiting for pod pod-projected-secrets-adab6f8f-8a6d-4d2f-b939-6c5115049a05 to disappear
    Sep  3 20:59:37.007: INFO: Pod pod-projected-secrets-adab6f8f-8a6d-4d2f-b939-6c5115049a05 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:59:37.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3083" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":69,"skipped":1090,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 42 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 20:59:59.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-8521" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":70,"skipped":1096,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:00:00.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-610" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":71,"skipped":1142,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    • [SLOW TEST:312.086 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":36,"skipped":674,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:00:07.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-9322" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":72,"skipped":1158,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:00:07.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-7375" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":73,"skipped":1165,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":4,"skipped":76,"failed":2,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 20:55:12.057: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
    
    STEP: creating a pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  3 20:58:48.000: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-3871/dns-test-41c04c4f-fe62-4768-921a-b3953cf96d4f: the server is currently unable to handle the request (get pods dns-test-41c04c4f-fe62-4768-921a-b3953cf96d4f)
    Sep  3 21:00:14.120: FAIL: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-3871/dns-test-41c04c4f-fe62-4768-921a-b3953cf96d4f: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-3871/pods/dns-test-41c04c4f-fe62-4768-921a-b3953cf96d4f/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001e1bda8, 0x29a3500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc002a9c780, 0xc001e1bda8, 0xc002a9c780, 0xc001e1bda8)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
    testing.tRunner(0xc00112ad80, 0x70fea78)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0903 21:00:14.121066      15 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  3 21:00:14.120: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-3871/dns-test-41c04c4f-fe62-4768-921a-b3953cf96d4f: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-3871/pods/dns-test-41c04c4f-fe62-4768-921a-b3953cf96d4f/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001e1bda8, 0x29a3500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc002a9c780, 0xc001e1bda8, 0xc002a9c780, 0xc001e1bda8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc001e1bda8, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc001082880, 0x8, 0x8, 0x6ee63d3, 0x7, 0xc000077400, 0x77b8c18, 0xc002e8d600, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0006db080, 0xc000077400, 0xc001082880, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.1()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x58a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc00112ad80)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc00112ad80)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc00112ad80, 0x70fea78)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.
    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a84100, 0xc0036480c0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6a84100, 0xc0036480c0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00115fce0, 0x159, 0x86a5e60, 0x7d, 0xd3, 0xc001124000, 0x7fb)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x61dbcc0, 0x75da840)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc00115fce0, 0x159, 0xc001e1b7e8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00115fce0, 0x159, 0xc001e1b8d0, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x6f89b47, 0x24, 0xc001e1bb30, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc002a9c700, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001e1bda8, 0x29a3500, 0x0, 0x0)
... skipping 54 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  3 21:00:14.120: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-3871/dns-test-41c04c4f-fe62-4768-921a-b3953cf96d4f: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-3871/pods/dns-test-41c04c4f-fe62-4768-921a-b3953cf96d4f/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":4,"skipped":76,"failed":3,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:00:41.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-9482" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":37,"skipped":687,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SS
    ------------------------------
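
Each spec also emits a one-line JSON progress record like the {"msg":"PASSED ..."} and {"msg":"FAILED ..."} lines in this log. A minimal sketch for tallying those records, assuming the raw log is piped in on stdin (the struct fields mirror the keys visible above; this tool is not part of the test suite):

// Sketch: count PASSED/FAILED per-spec records in a ginkgo conformance log.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type specRecord struct {
	Msg      string   `json:"msg"`
	Failed   int      `json:"failed"`
	Failures []string `json:"failures"`
}

func main() {
	var passed, failed int
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some log lines are very long
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, `{"msg":`) {
			continue // skip everything that is not a per-spec JSON record
		}
		var rec specRecord
		if err := json.Unmarshal([]byte(line), &rec); err != nil {
			continue
		}
		switch {
		case strings.HasPrefix(rec.Msg, "FAILED"):
			failed++
		case strings.HasPrefix(rec.Msg, "PASSED"):
			passed++
		}
	}
	fmt.Printf("passed=%d failed=%d\n", passed, failed)
}
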
    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "crd-webhook-98" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":38,"skipped":689,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:00:49.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5601" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":39,"skipped":695,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:00:49.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-3266" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":40,"skipped":722,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 41 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:00:59.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-4126" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":41,"skipped":772,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:02.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-watch-4559" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":42,"skipped":773,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.879 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":721,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:03.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-9986" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":43,"skipped":784,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:02:03.775: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override all
    Sep  3 21:02:03.821: INFO: Waiting up to 5m0s for pod "client-containers-808cf867-4fa5-4523-b3ea-35b32d026c84" in namespace "containers-2539" to be "Succeeded or Failed"

    Sep  3 21:02:03.826: INFO: Pod "client-containers-808cf867-4fa5-4523-b3ea-35b32d026c84": Phase="Pending", Reason="", readiness=false. Elapsed: 3.685447ms
    Sep  3 21:02:05.831: INFO: Pod "client-containers-808cf867-4fa5-4523-b3ea-35b32d026c84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009133667s
    STEP: Saw pod success
    Sep  3 21:02:05.831: INFO: Pod "client-containers-808cf867-4fa5-4523-b3ea-35b32d026c84" satisfied condition "Succeeded or Failed"

    Sep  3 21:02:05.835: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod client-containers-808cf867-4fa5-4523-b3ea-35b32d026c84 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 21:02:05.863: INFO: Waiting for pod client-containers-808cf867-4fa5-4523-b3ea-35b32d026c84 to disappear
    Sep  3 21:02:05.866: INFO: Pod client-containers-808cf867-4fa5-4523-b3ea-35b32d026c84 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:05.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-2539" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":786,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
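
The Docker Containers spec above follows the pattern that recurs throughout this log: create a pod, then wait up to 5m0s, polling its phase, until it reports Succeeded or Failed. A rough client-go sketch of that loop is given here; the helper name, package name, and polling interval are illustrative rather than the framework's own.

// Sketch of the "wait up to 5m0s for pod X to be 'Succeeded or Failed'" loop.
package e2ehelpers

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodCompleted polls the pod until its phase is Succeeded or Failed.
func WaitForPodCompleted(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		done := pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed
		return done, nil
	})
}
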
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:02:05.911: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  3 21:02:05.964: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-cdc07a20-aa5d-4b98-95eb-f58af13c96d9" in namespace "security-context-test-7614" to be "Succeeded or Failed"

    Sep  3 21:02:05.967: INFO: Pod "alpine-nnp-false-cdc07a20-aa5d-4b98-95eb-f58af13c96d9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.683577ms
    Sep  3 21:02:07.971: INFO: Pod "alpine-nnp-false-cdc07a20-aa5d-4b98-95eb-f58af13c96d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007170817s
    Sep  3 21:02:09.976: INFO: Pod "alpine-nnp-false-cdc07a20-aa5d-4b98-95eb-f58af13c96d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012291557s
    Sep  3 21:02:09.976: INFO: Pod "alpine-nnp-false-cdc07a20-aa5d-4b98-95eb-f58af13c96d9" satisfied condition "Succeeded or Failed"

    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:09.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-7614" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":796,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSS
    ------------------------------
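
The Security Context spec above boils down to running a container whose securityContext sets allowPrivilegeEscalation to false and checking that the pod completes. A minimal sketch of such a pod using the corev1 types follows; the image and object names are illustrative, not taken from the test's source.

// Sketch: a pod whose container disallows privilege escalation.
package e2ehelpers

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// NoPrivilegeEscalationPod returns a single-container pod with
// allowPrivilegeEscalation explicitly set to false.
func NoPrivilegeEscalationPod(ns string) *corev1.Pod {
	no := false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false-example", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "alpine:3.16", // illustrative image
				SecurityContext: &corev1.SecurityContext{
					AllowPrivilegeEscalation: &no,
				},
			}},
		},
	}
}
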
    [BeforeEach] [sig-node] PreStop
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:12.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "prestop-4032" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":44,"skipped":747,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:02:10.014: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep  3 21:02:10.056: INFO: Waiting up to 5m0s for pod "pod-ed9d762a-26a4-42aa-87da-d3cd9b12fd0a" in namespace "emptydir-1212" to be "Succeeded or Failed"

    Sep  3 21:02:10.060: INFO: Pod "pod-ed9d762a-26a4-42aa-87da-d3cd9b12fd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.714515ms
    Sep  3 21:02:12.066: INFO: Pod "pod-ed9d762a-26a4-42aa-87da-d3cd9b12fd0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009513966s
    STEP: Saw pod success
    Sep  3 21:02:12.066: INFO: Pod "pod-ed9d762a-26a4-42aa-87da-d3cd9b12fd0a" satisfied condition "Succeeded or Failed"

    Sep  3 21:02:12.069: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-tpmotr pod pod-ed9d762a-26a4-42aa-87da-d3cd9b12fd0a container test-container: <nil>
    STEP: delete the pod
    Sep  3 21:02:12.097: INFO: Waiting for pod pod-ed9d762a-26a4-42aa-87da-d3cd9b12fd0a to disappear
    Sep  3 21:02:12.101: INFO: Pod pod-ed9d762a-26a4-42aa-87da-d3cd9b12fd0a no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:12.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1212" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":800,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:02:12.115: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-472bb89e-011c-4276-93ca-f1a4cab60578
    STEP: Creating a pod to test consume configMaps
    Sep  3 21:02:12.167: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5403d844-9cad-490a-8afa-b73e8807a3a8" in namespace "projected-2331" to be "Succeeded or Failed"

    Sep  3 21:02:12.176: INFO: Pod "pod-projected-configmaps-5403d844-9cad-490a-8afa-b73e8807a3a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.487979ms
    Sep  3 21:02:14.181: INFO: Pod "pod-projected-configmaps-5403d844-9cad-490a-8afa-b73e8807a3a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013795124s
    STEP: Saw pod success
    Sep  3 21:02:14.181: INFO: Pod "pod-projected-configmaps-5403d844-9cad-490a-8afa-b73e8807a3a8" satisfied condition "Succeeded or Failed"

    Sep  3 21:02:14.184: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-tpmotr pod pod-projected-configmaps-5403d844-9cad-490a-8afa-b73e8807a3a8 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 21:02:14.202: INFO: Waiting for pod pod-projected-configmaps-5403d844-9cad-490a-8afa-b73e8807a3a8 to disappear
    Sep  3 21:02:14.205: INFO: Pod pod-projected-configmaps-5403d844-9cad-490a-8afa-b73e8807a3a8 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:14.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2331" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":776,"failed":0}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:02:12.149: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-53b384e3-5044-4e47-a5e7-bda275ab7320
    STEP: Creating a pod to test consume configMaps
    Sep  3 21:02:12.193: INFO: Waiting up to 5m0s for pod "pod-configmaps-c600f8f3-986d-4b69-862a-53c9555fd104" in namespace "configmap-7778" to be "Succeeded or Failed"

    Sep  3 21:02:12.196: INFO: Pod "pod-configmaps-c600f8f3-986d-4b69-862a-53c9555fd104": Phase="Pending", Reason="", readiness=false. Elapsed: 3.35373ms
    Sep  3 21:02:14.202: INFO: Pod "pod-configmaps-c600f8f3-986d-4b69-862a-53c9555fd104": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009039659s
    STEP: Saw pod success
    Sep  3 21:02:14.202: INFO: Pod "pod-configmaps-c600f8f3-986d-4b69-862a-53c9555fd104" satisfied condition "Succeeded or Failed"

    Sep  3 21:02:14.205: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-configmaps-c600f8f3-986d-4b69-862a-53c9555fd104 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 21:02:14.233: INFO: Waiting for pod pod-configmaps-c600f8f3-986d-4b69-862a-53c9555fd104 to disappear
    Sep  3 21:02:14.236: INFO: Pod pod-configmaps-c600f8f3-986d-4b69-862a-53c9555fd104 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:14.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-7778" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":819,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:02:14.258: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-5fa1dd7d-87d5-4894-ac27-861e1f780d4b
    STEP: Creating a pod to test consume secrets
    Sep  3 21:02:14.305: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-356645b9-4e5f-4d27-99b6-6f3b7c1c7024" in namespace "projected-7464" to be "Succeeded or Failed"

    Sep  3 21:02:14.309: INFO: Pod "pod-projected-secrets-356645b9-4e5f-4d27-99b6-6f3b7c1c7024": Phase="Pending", Reason="", readiness=false. Elapsed: 3.90593ms
    Sep  3 21:02:16.316: INFO: Pod "pod-projected-secrets-356645b9-4e5f-4d27-99b6-6f3b7c1c7024": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011225485s
    STEP: Saw pod success
    Sep  3 21:02:16.316: INFO: Pod "pod-projected-secrets-356645b9-4e5f-4d27-99b6-6f3b7c1c7024" satisfied condition "Succeeded or Failed"

    Sep  3 21:02:16.325: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-projected-secrets-356645b9-4e5f-4d27-99b6-6f3b7c1c7024 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  3 21:02:16.354: INFO: Waiting for pod pod-projected-secrets-356645b9-4e5f-4d27-99b6-6f3b7c1c7024 to disappear
    Sep  3 21:02:16.358: INFO: Pod pod-projected-secrets-356645b9-4e5f-4d27-99b6-6f3b7c1c7024 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:16.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7464" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":800,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 69 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
        should perform rolling updates and roll backs of template modifications [Conformance]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":74,"skipped":1206,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:02:18.549: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-7c025c4a-e8a3-46cc-9ce6-c9741d9bfa65
    STEP: Creating a pod to test consume secrets
    Sep  3 21:02:18.588: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-79226bc0-f33b-46d4-9565-77f84729e192" in namespace "projected-3730" to be "Succeeded or Failed"

    Sep  3 21:02:18.596: INFO: Pod "pod-projected-secrets-79226bc0-f33b-46d4-9565-77f84729e192": Phase="Pending", Reason="", readiness=false. Elapsed: 8.562145ms
    Sep  3 21:02:20.601: INFO: Pod "pod-projected-secrets-79226bc0-f33b-46d4-9565-77f84729e192": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013556103s
    STEP: Saw pod success
    Sep  3 21:02:20.601: INFO: Pod "pod-projected-secrets-79226bc0-f33b-46d4-9565-77f84729e192" satisfied condition "Succeeded or Failed"

    Sep  3 21:02:20.605: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-projected-secrets-79226bc0-f33b-46d4-9565-77f84729e192 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  3 21:02:20.627: INFO: Waiting for pod pod-projected-secrets-79226bc0-f33b-46d4-9565-77f84729e192 to disappear
    Sep  3 21:02:20.630: INFO: Pod pod-projected-secrets-79226bc0-f33b-46d4-9565-77f84729e192 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:20.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3730" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":75,"skipped":1230,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:20.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-2732" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":76,"skipped":1257,"failed":0}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:02:20.790: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
    Sep  3 21:02:22.854: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:22.858: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:22.869: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:22.872: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:22.876: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:22.880: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:22.886: INFO: Lookups using dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9500.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9500.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local jessie_udp@dns-test-service-2.dns-9500.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9500.svc.cluster.local]

    
    Sep  3 21:02:27.891: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:27.894: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:27.898: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:27.901: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:27.914: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:27.918: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:27.921: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:27.925: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:27.933: INFO: Lookups using dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9500.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9500.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local jessie_udp@dns-test-service-2.dns-9500.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9500.svc.cluster.local]

    
    Sep  3 21:02:32.891: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:32.894: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:32.897: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:32.901: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:32.910: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:32.913: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:32.916: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:32.919: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:32.926: INFO: Lookups using dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9500.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9500.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local jessie_udp@dns-test-service-2.dns-9500.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9500.svc.cluster.local]

    
    Sep  3 21:02:37.891: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:37.894: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:37.898: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:37.903: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:37.913: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:37.917: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:37.921: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:37.924: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:37.931: INFO: Lookups using dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9500.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9500.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local jessie_udp@dns-test-service-2.dns-9500.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9500.svc.cluster.local]

    
    Sep  3 21:02:42.891: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:42.894: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:42.898: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:42.901: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:42.911: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:42.915: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:42.919: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:42.923: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:42.931: INFO: Lookups using dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9500.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9500.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local jessie_udp@dns-test-service-2.dns-9500.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9500.svc.cluster.local]

    
    Sep  3 21:02:47.891: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:47.895: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:47.898: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:47.902: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:47.913: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:47.917: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:47.920: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:47.923: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9500.svc.cluster.local from pod dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62: the server could not find the requested resource (get pods dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62)
    Sep  3 21:02:47.930: INFO: Lookups using dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9500.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9500.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local jessie_udp@dns-test-service-2.dns-9500.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9500.svc.cluster.local]

    
    Sep  3 21:02:52.927: INFO: DNS probes using dns-9500/dns-test-25b381f7-d3a0-452c-98cf-2d683a547c62 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:52.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-9500" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":77,"skipped":1257,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
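
The DNS-for-Subdomain spec above keeps retrying UDP and TCP lookups of the headless-service names until they resolve (the "Lookups ... failed for" lines, followed by "DNS probes ... succeeded"). A standalone sketch of those lookups with Go's resolver is shown here; the names are copied from the log, and the real test's retry loop and in-cluster probe pods are omitted.

// Sketch: resolve the headless-service names over UDP and TCP.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	names := []string{
		"dns-querier-2.dns-test-service-2.dns-9500.svc.cluster.local",
		"dns-test-service-2.dns-9500.svc.cluster.local",
	}
	for _, proto := range []string{"udp", "tcp"} {
		proto := proto
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				// Force the chosen transport when talking to the cluster DNS server.
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, proto, address)
			},
		}
		for _, name := range names {
			addrs, err := r.LookupHost(context.Background(), name)
			if err != nil {
				fmt.Printf("%s lookup of %s failed: %v\n", proto, name, err)
				continue
			}
			fmt.Printf("%s lookup of %s: %v\n", proto, name, addrs)
		}
	}
}
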
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:53.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-8821" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":78,"skipped":1338,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:53.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-8476" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":79,"skipped":1345,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:55.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-806" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":80,"skipped":1368,"failed":0}

    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:02:55.852: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via environment variable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-2881/configmap-test-e46b40ee-19ab-48d1-ac15-1d6f2ebc35d4
    STEP: Creating a pod to test consume configMaps
    Sep  3 21:02:55.897: INFO: Waiting up to 5m0s for pod "pod-configmaps-3b7eb635-c6fe-4756-aef6-d2b0210da3ab" in namespace "configmap-2881" to be "Succeeded or Failed"

    Sep  3 21:02:55.900: INFO: Pod "pod-configmaps-3b7eb635-c6fe-4756-aef6-d2b0210da3ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.965597ms
    Sep  3 21:02:57.904: INFO: Pod "pod-configmaps-3b7eb635-c6fe-4756-aef6-d2b0210da3ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00696019s
    STEP: Saw pod success
    Sep  3 21:02:57.904: INFO: Pod "pod-configmaps-3b7eb635-c6fe-4756-aef6-d2b0210da3ab" satisfied condition "Succeeded or Failed"

    Sep  3 21:02:57.907: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-configmaps-3b7eb635-c6fe-4756-aef6-d2b0210da3ab container env-test: <nil>
    STEP: delete the pod
    Sep  3 21:02:57.920: INFO: Waiting for pod pod-configmaps-3b7eb635-c6fe-4756-aef6-d2b0210da3ab to disappear
    Sep  3 21:02:57.924: INFO: Pod pod-configmaps-3b7eb635-c6fe-4756-aef6-d2b0210da3ab no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:02:57.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-2881" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":81,"skipped":1368,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:03:15.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-1867" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":82,"skipped":1381,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:03:16.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-5357" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":47,"skipped":829,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:03:15.083: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-6676e30b-62e8-4496-b974-64ab938939f7
    STEP: Creating a pod to test consume secrets
    Sep  3 21:03:15.122: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-89abdedf-08e8-446f-83ad-d648b9077262" in namespace "projected-2626" to be "Succeeded or Failed"

    Sep  3 21:03:15.125: INFO: Pod "pod-projected-secrets-89abdedf-08e8-446f-83ad-d648b9077262": Phase="Pending", Reason="", readiness=false. Elapsed: 3.411867ms
    Sep  3 21:03:17.131: INFO: Pod "pod-projected-secrets-89abdedf-08e8-446f-83ad-d648b9077262": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009246561s
    STEP: Saw pod success
    Sep  3 21:03:17.131: INFO: Pod "pod-projected-secrets-89abdedf-08e8-446f-83ad-d648b9077262" satisfied condition "Succeeded or Failed"

    Sep  3 21:03:17.135: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-projected-secrets-89abdedf-08e8-446f-83ad-d648b9077262 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  3 21:03:17.152: INFO: Waiting for pod pod-projected-secrets-89abdedf-08e8-446f-83ad-d648b9077262 to disappear
    Sep  3 21:03:17.156: INFO: Pod pod-projected-secrets-89abdedf-08e8-446f-83ad-d648b9077262 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:03:17.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2626" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":83,"skipped":1405,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 21:03:17.276: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95967791-6e19-4f9f-be59-103d2172d321" in namespace "projected-53" to be "Succeeded or Failed"

    Sep  3 21:03:17.285: INFO: Pod "downwardapi-volume-95967791-6e19-4f9f-be59-103d2172d321": Phase="Pending", Reason="", readiness=false. Elapsed: 9.483273ms
    Sep  3 21:03:19.290: INFO: Pod "downwardapi-volume-95967791-6e19-4f9f-be59-103d2172d321": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014009646s
    STEP: Saw pod success
    Sep  3 21:03:19.290: INFO: Pod "downwardapi-volume-95967791-6e19-4f9f-be59-103d2172d321" satisfied condition "Succeeded or Failed"

    Sep  3 21:03:19.293: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod downwardapi-volume-95967791-6e19-4f9f-be59-103d2172d321 container client-container: <nil>
    STEP: delete the pod
    Sep  3 21:03:19.307: INFO: Waiting for pod downwardapi-volume-95967791-6e19-4f9f-be59-103d2172d321 to disappear
    Sep  3 21:03:19.310: INFO: Pod downwardapi-volume-95967791-6e19-4f9f-be59-103d2172d321 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:03:19.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-53" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":84,"skipped":1448,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:03:25.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-5782" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":854,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:03:31.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-6784" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":49,"skipped":863,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 21:03:31.475: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a766099a-c688-494c-bc46-539a54600a47" in namespace "downward-api-5940" to be "Succeeded or Failed"
    Sep  3 21:03:31.479: INFO: Pod "downwardapi-volume-a766099a-c688-494c-bc46-539a54600a47": Phase="Pending", Reason="", readiness=false. Elapsed: 3.334091ms
    Sep  3 21:03:33.485: INFO: Pod "downwardapi-volume-a766099a-c688-494c-bc46-539a54600a47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009060895s
    STEP: Saw pod success
    Sep  3 21:03:33.485: INFO: Pod "downwardapi-volume-a766099a-c688-494c-bc46-539a54600a47" satisfied condition "Succeeded or Failed"
    Sep  3 21:03:33.488: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod downwardapi-volume-a766099a-c688-494c-bc46-539a54600a47 container client-container: <nil>
    STEP: delete the pod
    Sep  3 21:03:33.506: INFO: Waiting for pod downwardapi-volume-a766099a-c688-494c-bc46-539a54600a47 to disappear
    Sep  3 21:03:33.509: INFO: Pod downwardapi-volume-a766099a-c688-494c-bc46-539a54600a47 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:03:33.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-5940" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":873,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with downward pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-downwardapi-csql
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  3 21:03:19.381: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-csql" in namespace "subpath-9051" to be "Succeeded or Failed"
    Sep  3 21:03:19.384: INFO: Pod "pod-subpath-test-downwardapi-csql": Phase="Pending", Reason="", readiness=false. Elapsed: 2.833542ms
    Sep  3 21:03:21.389: INFO: Pod "pod-subpath-test-downwardapi-csql": Phase="Running", Reason="", readiness=true. Elapsed: 2.007089554s
    Sep  3 21:03:23.393: INFO: Pod "pod-subpath-test-downwardapi-csql": Phase="Running", Reason="", readiness=true. Elapsed: 4.011286751s
    Sep  3 21:03:25.399: INFO: Pod "pod-subpath-test-downwardapi-csql": Phase="Running", Reason="", readiness=true. Elapsed: 6.017388659s
    Sep  3 21:03:27.403: INFO: Pod "pod-subpath-test-downwardapi-csql": Phase="Running", Reason="", readiness=true. Elapsed: 8.021952735s
    Sep  3 21:03:29.408: INFO: Pod "pod-subpath-test-downwardapi-csql": Phase="Running", Reason="", readiness=true. Elapsed: 10.026325686s
    Sep  3 21:03:31.412: INFO: Pod "pod-subpath-test-downwardapi-csql": Phase="Running", Reason="", readiness=true. Elapsed: 12.030534202s
    Sep  3 21:03:33.417: INFO: Pod "pod-subpath-test-downwardapi-csql": Phase="Running", Reason="", readiness=true. Elapsed: 14.035421239s
    Sep  3 21:03:35.425: INFO: Pod "pod-subpath-test-downwardapi-csql": Phase="Running", Reason="", readiness=true. Elapsed: 16.043682119s
    Sep  3 21:03:37.430: INFO: Pod "pod-subpath-test-downwardapi-csql": Phase="Running", Reason="", readiness=true. Elapsed: 18.048008372s
    Sep  3 21:03:39.434: INFO: Pod "pod-subpath-test-downwardapi-csql": Phase="Running", Reason="", readiness=true. Elapsed: 20.052509838s
    Sep  3 21:03:41.438: INFO: Pod "pod-subpath-test-downwardapi-csql": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.056741634s
    STEP: Saw pod success
    Sep  3 21:03:41.438: INFO: Pod "pod-subpath-test-downwardapi-csql" satisfied condition "Succeeded or Failed"
    Sep  3 21:03:41.442: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-subpath-test-downwardapi-csql container test-container-subpath-downwardapi-csql: <nil>
    STEP: delete the pod
    Sep  3 21:03:41.458: INFO: Waiting for pod pod-subpath-test-downwardapi-csql to disappear
    Sep  3 21:03:41.463: INFO: Pod pod-subpath-test-downwardapi-csql no longer exists
    STEP: Deleting pod pod-subpath-test-downwardapi-csql
    Sep  3 21:03:41.463: INFO: Deleting pod "pod-subpath-test-downwardapi-csql" in namespace "subpath-9051"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:03:41.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-9051" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":85,"skipped":1460,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:03:41.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-2957" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":51,"skipped":889,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:03:42.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-2365" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":86,"skipped":1466,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:03:41.843: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  3 21:03:41.886: INFO: Waiting up to 5m0s for pod "downward-api-09c8a28b-2d55-4c74-a83d-a96b974907c4" in namespace "downward-api-7365" to be "Succeeded or Failed"
    Sep  3 21:03:41.890: INFO: Pod "downward-api-09c8a28b-2d55-4c74-a83d-a96b974907c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.542058ms
    Sep  3 21:03:43.895: INFO: Pod "downward-api-09c8a28b-2d55-4c74-a83d-a96b974907c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008967623s
    STEP: Saw pod success
    Sep  3 21:03:43.895: INFO: Pod "downward-api-09c8a28b-2d55-4c74-a83d-a96b974907c4" satisfied condition "Succeeded or Failed"
    Sep  3 21:03:43.898: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod downward-api-09c8a28b-2d55-4c74-a83d-a96b974907c4 container dapi-container: <nil>
    STEP: delete the pod
    Sep  3 21:03:43.913: INFO: Waiting for pod downward-api-09c8a28b-2d55-4c74-a83d-a96b974907c4 to disappear
    Sep  3 21:03:43.916: INFO: Pod downward-api-09c8a28b-2d55-4c74-a83d-a96b974907c4 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:03:43.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7365" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":895,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:03:42.670: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test env composition
    Sep  3 21:03:42.711: INFO: Waiting up to 5m0s for pod "var-expansion-f972e8cc-8d89-414e-a649-2c10eaff8049" in namespace "var-expansion-7972" to be "Succeeded or Failed"
    Sep  3 21:03:42.715: INFO: Pod "var-expansion-f972e8cc-8d89-414e-a649-2c10eaff8049": Phase="Pending", Reason="", readiness=false. Elapsed: 3.993483ms
    Sep  3 21:03:44.719: INFO: Pod "var-expansion-f972e8cc-8d89-414e-a649-2c10eaff8049": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008778005s
    STEP: Saw pod success
    Sep  3 21:03:44.720: INFO: Pod "var-expansion-f972e8cc-8d89-414e-a649-2c10eaff8049" satisfied condition "Succeeded or Failed"
    Sep  3 21:03:44.723: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod var-expansion-f972e8cc-8d89-414e-a649-2c10eaff8049 container dapi-container: <nil>
    STEP: delete the pod
    Sep  3 21:03:44.740: INFO: Waiting for pod var-expansion-f972e8cc-8d89-414e-a649-2c10eaff8049 to disappear
    Sep  3 21:03:44.743: INFO: Pod var-expansion-f972e8cc-8d89-414e-a649-2c10eaff8049 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:03:44.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-7972" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":87,"skipped":1521,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:03:44.776: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  3 21:03:44.815: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-407a704d-e717-48a5-b9b8-c13e82fd1da2" in namespace "security-context-test-3326" to be "Succeeded or Failed"
    Sep  3 21:03:44.822: INFO: Pod "busybox-privileged-false-407a704d-e717-48a5-b9b8-c13e82fd1da2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.811933ms
    Sep  3 21:03:46.827: INFO: Pod "busybox-privileged-false-407a704d-e717-48a5-b9b8-c13e82fd1da2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011920875s
    Sep  3 21:03:46.827: INFO: Pod "busybox-privileged-false-407a704d-e717-48a5-b9b8-c13e82fd1da2" satisfied condition "Succeeded or Failed"
    Sep  3 21:03:46.833: INFO: Got logs for pod "busybox-privileged-false-407a704d-e717-48a5-b9b8-c13e82fd1da2": "ip: RTNETLINK answers: Operation not permitted\n"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:03:46.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-3326" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":88,"skipped":1534,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-8191-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":53,"skipped":926,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
    [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
    STEP: Watching for error events or started pod
    STEP: Waiting for pod completion
    STEP: Checking that the pod succeeded
    STEP: Getting logs from the pod
    STEP: Checking that the sysctl is actually updated
    [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:03:48.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-7573" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":89,"skipped":1550,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:03:59.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9518" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":90,"skipped":1554,"failed":0}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:04:00.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-279" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":91,"skipped":1572,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:04:00.374: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  3 21:04:00.411: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-d28c5122-9bd5-4305-804f-0f95aab9cc9f" in namespace "security-context-test-425" to be "Succeeded or Failed"
    Sep  3 21:04:00.415: INFO: Pod "busybox-readonly-false-d28c5122-9bd5-4305-804f-0f95aab9cc9f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.251638ms
    Sep  3 21:04:02.419: INFO: Pod "busybox-readonly-false-d28c5122-9bd5-4305-804f-0f95aab9cc9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007904459s
    Sep  3 21:04:02.419: INFO: Pod "busybox-readonly-false-d28c5122-9bd5-4305-804f-0f95aab9cc9f" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:04:02.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-425" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":92,"skipped":1578,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:04:03.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-198" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":93,"skipped":1624,"failed":0}

    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:04:03.090: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename watch
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:04:03.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-3193" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":94,"skipped":1624,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:04:03.208: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep  3 21:04:03.246: INFO: Waiting up to 5m0s for pod "pod-19723d2b-a259-4dae-be31-8fc24af76b6b" in namespace "emptydir-8460" to be "Succeeded or Failed"
    Sep  3 21:04:03.250: INFO: Pod "pod-19723d2b-a259-4dae-be31-8fc24af76b6b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.647491ms
    Sep  3 21:04:05.255: INFO: Pod "pod-19723d2b-a259-4dae-be31-8fc24af76b6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008923845s
    STEP: Saw pod success
    Sep  3 21:04:05.255: INFO: Pod "pod-19723d2b-a259-4dae-be31-8fc24af76b6b" satisfied condition "Succeeded or Failed"
    Sep  3 21:04:05.258: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod pod-19723d2b-a259-4dae-be31-8fc24af76b6b container test-container: <nil>
    STEP: delete the pod
    Sep  3 21:04:05.280: INFO: Waiting for pod pod-19723d2b-a259-4dae-be31-8fc24af76b6b to disappear
    Sep  3 21:04:05.283: INFO: Pod pod-19723d2b-a259-4dae-be31-8fc24af76b6b no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:04:05.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8460" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":95,"skipped":1655,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:04:10.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-8889" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":96,"skipped":1692,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 21:04:10.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-692d57cf-0a79-447c-9101-a74f0fc35ba9" in namespace "projected-2284" to be "Succeeded or Failed"
    Sep  3 21:04:10.536: INFO: Pod "downwardapi-volume-692d57cf-0a79-447c-9101-a74f0fc35ba9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.404895ms
    Sep  3 21:04:12.541: INFO: Pod "downwardapi-volume-692d57cf-0a79-447c-9101-a74f0fc35ba9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00884794s
    STEP: Saw pod success
    Sep  3 21:04:12.541: INFO: Pod "downwardapi-volume-692d57cf-0a79-447c-9101-a74f0fc35ba9" satisfied condition "Succeeded or Failed"
    Sep  3 21:04:12.544: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-wkqbk pod downwardapi-volume-692d57cf-0a79-447c-9101-a74f0fc35ba9 container client-container: <nil>
    STEP: delete the pod
    Sep  3 21:04:12.565: INFO: Waiting for pod downwardapi-volume-692d57cf-0a79-447c-9101-a74f0fc35ba9 to disappear
    Sep  3 21:04:12.568: INFO: Pod downwardapi-volume-692d57cf-0a79-447c-9101-a74f0fc35ba9 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:04:12.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2284" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":97,"skipped":1708,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Ingress API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:04:12.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingress-1227" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":98,"skipped":1717,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:04:12.721: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override arguments
    Sep  3 21:04:12.758: INFO: Waiting up to 5m0s for pod "client-containers-c3782303-3c10-435c-99a7-3993046e9fbc" in namespace "containers-3341" to be "Succeeded or Failed"
    Sep  3 21:04:12.761: INFO: Pod "client-containers-c3782303-3c10-435c-99a7-3993046e9fbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.775766ms
    Sep  3 21:04:14.765: INFO: Pod "client-containers-c3782303-3c10-435c-99a7-3993046e9fbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006818161s
    STEP: Saw pod success
    Sep  3 21:04:14.765: INFO: Pod "client-containers-c3782303-3c10-435c-99a7-3993046e9fbc" satisfied condition "Succeeded or Failed"
    Sep  3 21:04:14.768: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-wkqbk pod client-containers-c3782303-3c10-435c-99a7-3993046e9fbc container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 21:04:14.787: INFO: Waiting for pod client-containers-c3782303-3c10-435c-99a7-3993046e9fbc to disappear
    Sep  3 21:04:14.790: INFO: Pod client-containers-c3782303-3c10-435c-99a7-3993046e9fbc no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:04:14.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-3341" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":99,"skipped":1731,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    Sep  3 21:03:52.001: INFO: Unable to read jessie_udp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:52.008: INFO: Unable to read jessie_tcp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:52.012: INFO: Unable to read jessie_udp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:52.015: INFO: Unable to read jessie_tcp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:52.019: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:52.023: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:52.042: INFO: Lookups using dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5710 wheezy_tcp@dns-test-service.dns-5710 wheezy_udp@dns-test-service.dns-5710.svc wheezy_tcp@dns-test-service.dns-5710.svc wheezy_udp@_http._tcp.dns-test-service.dns-5710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5710 jessie_tcp@dns-test-service.dns-5710 jessie_udp@dns-test-service.dns-5710.svc jessie_tcp@dns-test-service.dns-5710.svc jessie_udp@_http._tcp.dns-test-service.dns-5710.svc jessie_tcp@_http._tcp.dns-test-service.dns-5710.svc]
    
    Sep  3 21:03:57.046: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:57.050: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:57.053: INFO: Unable to read wheezy_udp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:57.057: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:57.060: INFO: Unable to read wheezy_udp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
... skipping 5 lines ...
    Sep  3 21:03:57.105: INFO: Unable to read jessie_udp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:57.108: INFO: Unable to read jessie_tcp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:57.111: INFO: Unable to read jessie_udp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:57.115: INFO: Unable to read jessie_tcp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:57.119: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:57.122: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:03:57.143: INFO: Lookups using dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5710 wheezy_tcp@dns-test-service.dns-5710 wheezy_udp@dns-test-service.dns-5710.svc wheezy_tcp@dns-test-service.dns-5710.svc wheezy_udp@_http._tcp.dns-test-service.dns-5710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5710 jessie_tcp@dns-test-service.dns-5710 jessie_udp@dns-test-service.dns-5710.svc jessie_tcp@dns-test-service.dns-5710.svc jessie_udp@_http._tcp.dns-test-service.dns-5710.svc jessie_tcp@_http._tcp.dns-test-service.dns-5710.svc]
    
    Sep  3 21:04:02.046: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:02.050: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:02.054: INFO: Unable to read wheezy_udp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:02.058: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:02.061: INFO: Unable to read wheezy_udp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
... skipping 5 lines ...
    Sep  3 21:04:02.107: INFO: Unable to read jessie_udp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:02.111: INFO: Unable to read jessie_tcp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:02.114: INFO: Unable to read jessie_udp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:02.118: INFO: Unable to read jessie_tcp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:02.122: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:02.126: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:02.151: INFO: Lookups using dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5710 wheezy_tcp@dns-test-service.dns-5710 wheezy_udp@dns-test-service.dns-5710.svc wheezy_tcp@dns-test-service.dns-5710.svc wheezy_udp@_http._tcp.dns-test-service.dns-5710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5710 jessie_tcp@dns-test-service.dns-5710 jessie_udp@dns-test-service.dns-5710.svc jessie_tcp@dns-test-service.dns-5710.svc jessie_udp@_http._tcp.dns-test-service.dns-5710.svc jessie_tcp@_http._tcp.dns-test-service.dns-5710.svc]
    
    Sep  3 21:04:07.047: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:07.051: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:07.055: INFO: Unable to read wheezy_udp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:07.058: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:07.062: INFO: Unable to read wheezy_udp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
... skipping 5 lines ...
    Sep  3 21:04:07.110: INFO: Unable to read jessie_udp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:07.114: INFO: Unable to read jessie_tcp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:07.118: INFO: Unable to read jessie_udp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:07.122: INFO: Unable to read jessie_tcp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:07.127: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:07.132: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:07.157: INFO: Lookups using dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5710 wheezy_tcp@dns-test-service.dns-5710 wheezy_udp@dns-test-service.dns-5710.svc wheezy_tcp@dns-test-service.dns-5710.svc wheezy_udp@_http._tcp.dns-test-service.dns-5710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5710 jessie_tcp@dns-test-service.dns-5710 jessie_udp@dns-test-service.dns-5710.svc jessie_tcp@dns-test-service.dns-5710.svc jessie_udp@_http._tcp.dns-test-service.dns-5710.svc jessie_tcp@_http._tcp.dns-test-service.dns-5710.svc]
    
    Sep  3 21:04:12.046: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:12.049: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:12.052: INFO: Unable to read wheezy_udp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:12.055: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:12.059: INFO: Unable to read wheezy_udp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
... skipping 5 lines ...
    Sep  3 21:04:12.104: INFO: Unable to read jessie_udp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:12.108: INFO: Unable to read jessie_tcp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:12.112: INFO: Unable to read jessie_udp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:12.115: INFO: Unable to read jessie_tcp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:12.118: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:12.121: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:12.140: INFO: Lookups using dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5710 wheezy_tcp@dns-test-service.dns-5710 wheezy_udp@dns-test-service.dns-5710.svc wheezy_tcp@dns-test-service.dns-5710.svc wheezy_udp@_http._tcp.dns-test-service.dns-5710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5710 jessie_tcp@dns-test-service.dns-5710 jessie_udp@dns-test-service.dns-5710.svc jessie_tcp@dns-test-service.dns-5710.svc jessie_udp@_http._tcp.dns-test-service.dns-5710.svc jessie_tcp@_http._tcp.dns-test-service.dns-5710.svc]
    
    Sep  3 21:04:17.052: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:17.056: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:17.059: INFO: Unable to read wheezy_udp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:17.063: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:17.067: INFO: Unable to read wheezy_udp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
... skipping 5 lines ...
    Sep  3 21:04:17.108: INFO: Unable to read jessie_udp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:17.111: INFO: Unable to read jessie_tcp@dns-test-service.dns-5710 from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:17.115: INFO: Unable to read jessie_udp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:17.118: INFO: Unable to read jessie_tcp@dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:17.123: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:17.127: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5710.svc from pod dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3: the server could not find the requested resource (get pods dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3)
    Sep  3 21:04:17.153: INFO: Lookups using dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5710 wheezy_tcp@dns-test-service.dns-5710 wheezy_udp@dns-test-service.dns-5710.svc wheezy_tcp@dns-test-service.dns-5710.svc wheezy_udp@_http._tcp.dns-test-service.dns-5710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5710 jessie_tcp@dns-test-service.dns-5710 jessie_udp@dns-test-service.dns-5710.svc jessie_tcp@dns-test-service.dns-5710.svc jessie_udp@_http._tcp.dns-test-service.dns-5710.svc jessie_tcp@_http._tcp.dns-test-service.dns-5710.svc]
    
    Sep  3 21:04:22.154: INFO: DNS probes using dns-5710/dns-test-4f619a76-b64d-44de-9bcc-9075c30e5ac3 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:04:22.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-5710" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":54,"skipped":986,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    • [SLOW TEST:150.452 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should have monotonically increasing restart count [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":824,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    Sep  3 21:04:26.454: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:26.458: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:26.480: INFO: Unable to read jessie_udp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:26.483: INFO: Unable to read jessie_tcp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:26.487: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:26.490: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:26.509: INFO: Lookups using dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42 failed for: [wheezy_udp@dns-test-service.dns-7995.svc.cluster.local wheezy_tcp@dns-test-service.dns-7995.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local jessie_udp@dns-test-service.dns-7995.svc.cluster.local jessie_tcp@dns-test-service.dns-7995.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local]
    
    Sep  3 21:04:31.514: INFO: Unable to read wheezy_udp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:31.518: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:31.521: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:31.526: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:31.554: INFO: Unable to read jessie_udp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:31.557: INFO: Unable to read jessie_tcp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:31.560: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:31.565: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:31.587: INFO: Lookups using dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42 failed for: [wheezy_udp@dns-test-service.dns-7995.svc.cluster.local wheezy_tcp@dns-test-service.dns-7995.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local jessie_udp@dns-test-service.dns-7995.svc.cluster.local jessie_tcp@dns-test-service.dns-7995.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local]
    
    Sep  3 21:04:36.514: INFO: Unable to read wheezy_udp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:36.518: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:36.522: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:36.526: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:36.551: INFO: Unable to read jessie_udp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:36.554: INFO: Unable to read jessie_tcp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:36.558: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:36.561: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:36.582: INFO: Lookups using dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42 failed for: [wheezy_udp@dns-test-service.dns-7995.svc.cluster.local wheezy_tcp@dns-test-service.dns-7995.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local jessie_udp@dns-test-service.dns-7995.svc.cluster.local jessie_tcp@dns-test-service.dns-7995.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local]
    
    Sep  3 21:04:41.513: INFO: Unable to read wheezy_udp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:41.516: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:41.520: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:41.523: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:41.547: INFO: Unable to read jessie_udp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:41.551: INFO: Unable to read jessie_tcp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:41.554: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:41.557: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:41.576: INFO: Lookups using dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42 failed for: [wheezy_udp@dns-test-service.dns-7995.svc.cluster.local wheezy_tcp@dns-test-service.dns-7995.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local jessie_udp@dns-test-service.dns-7995.svc.cluster.local jessie_tcp@dns-test-service.dns-7995.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local]
    
    Sep  3 21:04:46.515: INFO: Unable to read wheezy_udp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:46.522: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:46.530: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:46.534: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:46.575: INFO: Unable to read jessie_udp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:46.580: INFO: Unable to read jessie_tcp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:46.585: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:46.590: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:46.616: INFO: Lookups using dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42 failed for: [wheezy_udp@dns-test-service.dns-7995.svc.cluster.local wheezy_tcp@dns-test-service.dns-7995.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local jessie_udp@dns-test-service.dns-7995.svc.cluster.local jessie_tcp@dns-test-service.dns-7995.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local]
    
    Sep  3 21:04:51.514: INFO: Unable to read wheezy_udp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:51.518: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:51.523: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:51.526: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:51.552: INFO: Unable to read jessie_udp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:51.555: INFO: Unable to read jessie_tcp@dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:51.559: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:51.563: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local from pod dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42: the server could not find the requested resource (get pods dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42)
    Sep  3 21:04:51.584: INFO: Lookups using dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42 failed for: [wheezy_udp@dns-test-service.dns-7995.svc.cluster.local wheezy_tcp@dns-test-service.dns-7995.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local jessie_udp@dns-test-service.dns-7995.svc.cluster.local jessie_tcp@dns-test-service.dns-7995.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7995.svc.cluster.local]
    
    Sep  3 21:04:56.577: INFO: DNS probes using dns-7995/dns-test-3fa86c19-0ce3-4cb7-bd7d-8e5cae7a9e42 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:04:56.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-7995" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":55,"skipped":1031,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:04:14.833: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep  3 21:04:14.873: INFO: PodSpec: initContainers in spec.initContainers
    Sep  3 21:05:01.061: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-dd871de2-53e3-4673-9a0a-9ff278270795", GenerateName:"", Namespace:"init-container-5655", SelfLink:"", UID:"af997c38-51d5-43e1-b156-2e0d3bcd57c1", ResourceVersion:"16675", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63797835854, loc:(*time.Location)(0x9e363e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"873260727"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0036876f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003687710)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003687728), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003687740)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-2sjkv", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002ac3b80), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-2sjkv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, 
Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-2sjkv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-2sjkv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0044985a0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0029e6000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004498620)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004498640)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004498648), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00449864c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0036a75c0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", 
Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797835854, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797835854, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797835854, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63797835854, loc:(*time.Location)(0x9e363e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"192.168.0.65", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.0.65"}}, StartTime:(*v1.Time)(0xc003687770), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0029e60e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0029e6150)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://766937dff3a57c7ea9cd3bddf6a2b39a3e67a82626adb8aeaf638646e1a19c9d", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ac3c20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ac3c00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0044986cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:05:01.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-5655" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":100,"skipped":1752,"failed":0}
    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] KubeletManagedEtcHosts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:05:01.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "e2e-kubelet-etc-hosts-774" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":1032,"failed":0}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:05:01.717: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-7bb72297-7c42-47ef-9d71-34309d4f1156
    STEP: Creating a pod to test consume secrets
    Sep  3 21:05:01.757: INFO: Waiting up to 5m0s for pod "pod-secrets-9a72e840-32f2-474f-b960-15719478e9b3" in namespace "secrets-4481" to be "Succeeded or Failed"
    Sep  3 21:05:01.761: INFO: Pod "pod-secrets-9a72e840-32f2-474f-b960-15719478e9b3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.301009ms
    Sep  3 21:05:03.766: INFO: Pod "pod-secrets-9a72e840-32f2-474f-b960-15719478e9b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008930216s
    STEP: Saw pod success
    Sep  3 21:05:03.767: INFO: Pod "pod-secrets-9a72e840-32f2-474f-b960-15719478e9b3" satisfied condition "Succeeded or Failed"
    Sep  3 21:05:03.771: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod pod-secrets-9a72e840-32f2-474f-b960-15719478e9b3 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  3 21:05:03.789: INFO: Waiting for pod pod-secrets-9a72e840-32f2-474f-b960-15719478e9b3 to disappear
    Sep  3 21:05:03.793: INFO: Pod pod-secrets-9a72e840-32f2-474f-b960-15719478e9b3 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:05:03.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-4481" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":1038,"failed":0}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:05:03.818: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-eaf0dc9a-f91d-4fef-b17b-53ce53bbdc11
    STEP: Creating a pod to test consume configMaps
    Sep  3 21:05:03.870: INFO: Waiting up to 5m0s for pod "pod-configmaps-50eeb547-754e-4c9f-93a0-b73f288e8038" in namespace "configmap-3575" to be "Succeeded or Failed"
    Sep  3 21:05:03.873: INFO: Pod "pod-configmaps-50eeb547-754e-4c9f-93a0-b73f288e8038": Phase="Pending", Reason="", readiness=false. Elapsed: 2.882744ms
    Sep  3 21:05:05.877: INFO: Pod "pod-configmaps-50eeb547-754e-4c9f-93a0-b73f288e8038": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007614506s
    STEP: Saw pod success
    Sep  3 21:05:05.877: INFO: Pod "pod-configmaps-50eeb547-754e-4c9f-93a0-b73f288e8038" satisfied condition "Succeeded or Failed"
    Sep  3 21:05:05.881: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod pod-configmaps-50eeb547-754e-4c9f-93a0-b73f288e8038 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 21:05:05.897: INFO: Waiting for pod pod-configmaps-50eeb547-754e-4c9f-93a0-b73f288e8038 to disappear
    Sep  3 21:05:05.900: INFO: Pod pod-configmaps-50eeb547-754e-4c9f-93a0-b73f288e8038 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:05:05.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-3575" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":1046,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir wrapper volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:05:08.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-wrapper-6706" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":59,"skipped":1078,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:05:10.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-1941" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":60,"skipped":1101,"failed":0}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:05:10.217: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-419643fb-40a2-4992-b1a9-310f85213e88
    STEP: Creating a pod to test consume configMaps
    Sep  3 21:05:10.263: INFO: Waiting up to 5m0s for pod "pod-configmaps-39171e5d-9b7a-4562-b93d-1325f2bba248" in namespace "configmap-7057" to be "Succeeded or Failed"
    Sep  3 21:05:10.267: INFO: Pod "pod-configmaps-39171e5d-9b7a-4562-b93d-1325f2bba248": Phase="Pending", Reason="", readiness=false. Elapsed: 3.30179ms
    Sep  3 21:05:12.271: INFO: Pod "pod-configmaps-39171e5d-9b7a-4562-b93d-1325f2bba248": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007839962s
    STEP: Saw pod success
    Sep  3 21:05:12.271: INFO: Pod "pod-configmaps-39171e5d-9b7a-4562-b93d-1325f2bba248" satisfied condition "Succeeded or Failed"
    Sep  3 21:05:12.274: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod pod-configmaps-39171e5d-9b7a-4562-b93d-1325f2bba248 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 21:05:12.291: INFO: Waiting for pod pod-configmaps-39171e5d-9b7a-4562-b93d-1325f2bba248 to disappear
    Sep  3 21:05:12.294: INFO: Pod pod-configmaps-39171e5d-9b7a-4562-b93d-1325f2bba248 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:05:12.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-7057" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":61,"skipped":1104,"failed":0}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:05:22.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-538" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":62,"skipped":1107,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-scheduling] LimitRange
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:05:29.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "limitrange-7697" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":63,"skipped":1108,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:05:37.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-9894" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":64,"skipped":1129,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:05:37.841: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on tmpfs
    Sep  3 21:05:37.887: INFO: Waiting up to 5m0s for pod "pod-014506a8-940a-49d2-aa11-191a6f84bdde" in namespace "emptydir-8696" to be "Succeeded or Failed"
    Sep  3 21:05:37.893: INFO: Pod "pod-014506a8-940a-49d2-aa11-191a6f84bdde": Phase="Pending", Reason="", readiness=false. Elapsed: 5.586776ms
    Sep  3 21:05:39.897: INFO: Pod "pod-014506a8-940a-49d2-aa11-191a6f84bdde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009439494s
    STEP: Saw pod success
    Sep  3 21:05:39.897: INFO: Pod "pod-014506a8-940a-49d2-aa11-191a6f84bdde" satisfied condition "Succeeded or Failed"
    Sep  3 21:05:39.900: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-014506a8-940a-49d2-aa11-191a6f84bdde container test-container: <nil>
    STEP: delete the pod
    Sep  3 21:05:39.927: INFO: Waiting for pod pod-014506a8-940a-49d2-aa11-191a6f84bdde to disappear
    Sep  3 21:05:39.931: INFO: Pod pod-014506a8-940a-49d2-aa11-191a6f84bdde no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:05:39.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8696" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":65,"skipped":1163,"failed":0}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] RuntimeClass
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:05:40.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "runtimeclass-5708" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":-1,"completed":66,"skipped":1166,"failed":0}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:05:40.089: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  3 21:05:40.128: INFO: Waiting up to 5m0s for pod "downward-api-a584b088-754a-4112-afa0-4711ee032cef" in namespace "downward-api-9820" to be "Succeeded or Failed"
    Sep  3 21:05:40.132: INFO: Pod "downward-api-a584b088-754a-4112-afa0-4711ee032cef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.777195ms
    Sep  3 21:05:42.138: INFO: Pod "downward-api-a584b088-754a-4112-afa0-4711ee032cef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009959156s
    STEP: Saw pod success
    Sep  3 21:05:42.138: INFO: Pod "downward-api-a584b088-754a-4112-afa0-4711ee032cef" satisfied condition "Succeeded or Failed"
    Sep  3 21:05:42.143: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod downward-api-a584b088-754a-4112-afa0-4711ee032cef container dapi-container: <nil>
    STEP: delete the pod
    Sep  3 21:05:42.173: INFO: Waiting for pod downward-api-a584b088-754a-4112-afa0-4711ee032cef to disappear
    Sep  3 21:05:42.181: INFO: Pod downward-api-a584b088-754a-4112-afa0-4711ee032cef no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:05:42.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-9820" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":1174,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:05:42.302: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep  3 21:05:42.361: INFO: Waiting up to 5m0s for pod "pod-a913ac55-3b18-45e0-a221-fa391ef5f334" in namespace "emptydir-4856" to be "Succeeded or Failed"
    Sep  3 21:05:42.368: INFO: Pod "pod-a913ac55-3b18-45e0-a221-fa391ef5f334": Phase="Pending", Reason="", readiness=false. Elapsed: 7.201297ms
    Sep  3 21:05:44.373: INFO: Pod "pod-a913ac55-3b18-45e0-a221-fa391ef5f334": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012234963s
    STEP: Saw pod success
    Sep  3 21:05:44.373: INFO: Pod "pod-a913ac55-3b18-45e0-a221-fa391ef5f334" satisfied condition "Succeeded or Failed"
    Sep  3 21:05:44.377: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-a913ac55-3b18-45e0-a221-fa391ef5f334 container test-container: <nil>
    STEP: delete the pod
    Sep  3 21:05:44.398: INFO: Waiting for pod pod-a913ac55-3b18-45e0-a221-fa391ef5f334 to disappear
    Sep  3 21:05:44.401: INFO: Pod pod-a913ac55-3b18-45e0-a221-fa391ef5f334 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:05:44.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4856" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":68,"skipped":1230,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep  3 21:05:04.911: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
    [It] should be able to convert from CR v1 to CR v2 [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  3 21:05:04.915: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  3 21:05:17.488: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-4933-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-9146.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep  3 21:05:27.592: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-4933-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-9146.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep  3 21:05:37.694: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-4933-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-9146.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep  3 21:05:47.799: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-4933-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-9146.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep  3 21:05:57.805: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-4933-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-9146.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep  3 21:05:57.805: FAIL: Unexpected error:
        <*errors.errorString | 0xc000242290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    • Failure [57.275 seconds]
    [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should be able to convert from CR v1 to CR v2 [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  3 21:05:57.805: Unexpected error:
          <*errors.errorString | 0xc000242290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:00.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-2286" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":69,"skipped":1234,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":100,"skipped":1770,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:05:58.390: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename crd-webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
    STEP: Destroying namespace "crd-webhook-9228" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":101,"skipped":1770,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "webhook-9145-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":102,"skipped":1803,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:13.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-5396" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":103,"skipped":1812,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:13.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7928" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":104,"skipped":1819,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with projected pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-projected-58jb
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  3 21:06:00.702: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-58jb" in namespace "subpath-9698" to be "Succeeded or Failed"
    Sep  3 21:06:00.706: INFO: Pod "pod-subpath-test-projected-58jb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.16947ms
    Sep  3 21:06:02.710: INFO: Pod "pod-subpath-test-projected-58jb": Phase="Running", Reason="", readiness=true. Elapsed: 2.00804589s
    Sep  3 21:06:04.714: INFO: Pod "pod-subpath-test-projected-58jb": Phase="Running", Reason="", readiness=true. Elapsed: 4.011908548s
    Sep  3 21:06:06.718: INFO: Pod "pod-subpath-test-projected-58jb": Phase="Running", Reason="", readiness=true. Elapsed: 6.015722985s
    Sep  3 21:06:08.723: INFO: Pod "pod-subpath-test-projected-58jb": Phase="Running", Reason="", readiness=true. Elapsed: 8.020755386s
    Sep  3 21:06:10.727: INFO: Pod "pod-subpath-test-projected-58jb": Phase="Running", Reason="", readiness=true. Elapsed: 10.025007318s
    Sep  3 21:06:12.733: INFO: Pod "pod-subpath-test-projected-58jb": Phase="Running", Reason="", readiness=true. Elapsed: 12.030211721s
    Sep  3 21:06:14.736: INFO: Pod "pod-subpath-test-projected-58jb": Phase="Running", Reason="", readiness=true. Elapsed: 14.033845837s
    Sep  3 21:06:16.740: INFO: Pod "pod-subpath-test-projected-58jb": Phase="Running", Reason="", readiness=true. Elapsed: 16.037424329s
    Sep  3 21:06:18.744: INFO: Pod "pod-subpath-test-projected-58jb": Phase="Running", Reason="", readiness=true. Elapsed: 18.041456775s
    Sep  3 21:06:20.749: INFO: Pod "pod-subpath-test-projected-58jb": Phase="Running", Reason="", readiness=true. Elapsed: 20.046342838s
    Sep  3 21:06:22.753: INFO: Pod "pod-subpath-test-projected-58jb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.050767901s
    STEP: Saw pod success
    Sep  3 21:06:22.753: INFO: Pod "pod-subpath-test-projected-58jb" satisfied condition "Succeeded or Failed"
    Sep  3 21:06:22.756: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-subpath-test-projected-58jb container test-container-subpath-projected-58jb: <nil>
    STEP: delete the pod
    Sep  3 21:06:22.772: INFO: Waiting for pod pod-subpath-test-projected-58jb to disappear
    Sep  3 21:06:22.775: INFO: Pod pod-subpath-test-projected-58jb no longer exists
    STEP: Deleting pod pod-subpath-test-projected-58jb
    Sep  3 21:06:22.775: INFO: Deleting pod "pod-subpath-test-projected-58jb" in namespace "subpath-9698"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:22.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-9698" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":70,"skipped":1286,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:26.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-7513" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":105,"skipped":1821,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:06:26.740: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep  3 21:06:26.776: INFO: Waiting up to 5m0s for pod "pod-8202f858-e2b9-4605-95d3-5649988f56d4" in namespace "emptydir-9603" to be "Succeeded or Failed"
    Sep  3 21:06:26.780: INFO: Pod "pod-8202f858-e2b9-4605-95d3-5649988f56d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.002068ms
    Sep  3 21:06:28.784: INFO: Pod "pod-8202f858-e2b9-4605-95d3-5649988f56d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008511298s
    STEP: Saw pod success
    Sep  3 21:06:28.784: INFO: Pod "pod-8202f858-e2b9-4605-95d3-5649988f56d4" satisfied condition "Succeeded or Failed"
    Sep  3 21:06:28.787: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-8202f858-e2b9-4605-95d3-5649988f56d4 container test-container: <nil>
    STEP: delete the pod
    Sep  3 21:06:28.800: INFO: Waiting for pod pod-8202f858-e2b9-4605-95d3-5649988f56d4 to disappear
    Sep  3 21:06:28.803: INFO: Pod pod-8202f858-e2b9-4605-95d3-5649988f56d4 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:28.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-9603" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":106,"skipped":1843,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:06:28.837: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-13d6a17e-9dd8-4ceb-8f74-f541c2400994
    STEP: Creating a pod to test consume secrets
    Sep  3 21:06:28.878: INFO: Waiting up to 5m0s for pod "pod-secrets-df9acb31-d16d-4312-9026-2aec6d4cd9fa" in namespace "secrets-780" to be "Succeeded or Failed"
    Sep  3 21:06:28.886: INFO: Pod "pod-secrets-df9acb31-d16d-4312-9026-2aec6d4cd9fa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.375125ms
    Sep  3 21:06:30.890: INFO: Pod "pod-secrets-df9acb31-d16d-4312-9026-2aec6d4cd9fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012223197s
    STEP: Saw pod success
    Sep  3 21:06:30.890: INFO: Pod "pod-secrets-df9acb31-d16d-4312-9026-2aec6d4cd9fa" satisfied condition "Succeeded or Failed"
    Sep  3 21:06:30.893: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod pod-secrets-df9acb31-d16d-4312-9026-2aec6d4cd9fa container secret-env-test: <nil>
    STEP: delete the pod
    Sep  3 21:06:30.910: INFO: Waiting for pod pod-secrets-df9acb31-d16d-4312-9026-2aec6d4cd9fa to disappear
    Sep  3 21:06:30.913: INFO: Pod pod-secrets-df9acb31-d16d-4312-9026-2aec6d4cd9fa no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:30.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-780" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":107,"skipped":1859,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:06:30.981: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a volume subpath [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in volume subpath
    Sep  3 21:06:31.022: INFO: Waiting up to 5m0s for pod "var-expansion-e38f3d96-d3f4-4ab6-960b-00463eaeaf14" in namespace "var-expansion-5669" to be "Succeeded or Failed"
    Sep  3 21:06:31.025: INFO: Pod "var-expansion-e38f3d96-d3f4-4ab6-960b-00463eaeaf14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.891939ms
    Sep  3 21:06:33.030: INFO: Pod "var-expansion-e38f3d96-d3f4-4ab6-960b-00463eaeaf14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007895912s
    STEP: Saw pod success
    Sep  3 21:06:33.030: INFO: Pod "var-expansion-e38f3d96-d3f4-4ab6-960b-00463eaeaf14" satisfied condition "Succeeded or Failed"
    Sep  3 21:06:33.033: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod var-expansion-e38f3d96-d3f4-4ab6-960b-00463eaeaf14 container dapi-container: <nil>
    STEP: delete the pod
    Sep  3 21:06:33.047: INFO: Waiting for pod var-expansion-e38f3d96-d3f4-4ab6-960b-00463eaeaf14 to disappear
    Sep  3 21:06:33.051: INFO: Pod var-expansion-e38f3d96-d3f4-4ab6-960b-00463eaeaf14 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:33.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-5669" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":108,"skipped":1891,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:39.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-4408" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":109,"skipped":1902,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] version v1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 345 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:45.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "proxy-4898" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":71,"skipped":1290,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:46.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-5350" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":110,"skipped":1923,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:06:45.276: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep  3 21:06:45.327: INFO: Waiting up to 5m0s for pod "pod-572fd88b-2438-4371-99ce-3d100dd1247f" in namespace "emptydir-828" to be "Succeeded or Failed"
    Sep  3 21:06:45.332: INFO: Pod "pod-572fd88b-2438-4371-99ce-3d100dd1247f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.668463ms
    Sep  3 21:06:47.338: INFO: Pod "pod-572fd88b-2438-4371-99ce-3d100dd1247f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010185949s
    STEP: Saw pod success
    Sep  3 21:06:47.338: INFO: Pod "pod-572fd88b-2438-4371-99ce-3d100dd1247f" satisfied condition "Succeeded or Failed"
    Sep  3 21:06:47.341: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-572fd88b-2438-4371-99ce-3d100dd1247f container test-container: <nil>
    STEP: delete the pod
    Sep  3 21:06:47.359: INFO: Waiting for pod pod-572fd88b-2438-4371-99ce-3d100dd1247f to disappear
    Sep  3 21:06:47.362: INFO: Pod pod-572fd88b-2438-4371-99ce-3d100dd1247f no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:47.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-828" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":72,"skipped":1297,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:06:46.272: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  3 21:06:46.307: INFO: Waiting up to 5m0s for pod "busybox-user-65534-cabd83e2-39da-466e-919c-a6efa3ac0eea" in namespace "security-context-test-9766" to be "Succeeded or Failed"
    Sep  3 21:06:46.309: INFO: Pod "busybox-user-65534-cabd83e2-39da-466e-919c-a6efa3ac0eea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.527718ms
    Sep  3 21:06:48.314: INFO: Pod "busybox-user-65534-cabd83e2-39da-466e-919c-a6efa3ac0eea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007062358s
    Sep  3 21:06:48.314: INFO: Pod "busybox-user-65534-cabd83e2-39da-466e-919c-a6efa3ac0eea" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:48.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-9766" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":111,"skipped":1949,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
    [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
    STEP: Watching for error events or started pod
    STEP: Waiting for pod completion
    STEP: Checking that the pod succeeded
    STEP: Getting logs from the pod
    STEP: Checking that the sysctl is actually updated
    [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:50.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-8779" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":112,"skipped":1974,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:06:50.439: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:51.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7437" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":113,"skipped":1974,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 36 lines ...
    STEP: Destroying namespace "services-1342" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":73,"skipped":1346,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:06:57.747: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-c5c44982-052f-4758-bcdb-7830773c9317
    STEP: Creating a pod to test consume secrets
    Sep  3 21:06:57.812: INFO: Waiting up to 5m0s for pod "pod-secrets-a2172cd3-d55c-409b-ae87-4bc22245c6e7" in namespace "secrets-3738" to be "Succeeded or Failed"
    Sep  3 21:06:57.815: INFO: Pod "pod-secrets-a2172cd3-d55c-409b-ae87-4bc22245c6e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.881521ms
    Sep  3 21:06:59.820: INFO: Pod "pod-secrets-a2172cd3-d55c-409b-ae87-4bc22245c6e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007335053s
    STEP: Saw pod success
    Sep  3 21:06:59.820: INFO: Pod "pod-secrets-a2172cd3-d55c-409b-ae87-4bc22245c6e7" satisfied condition "Succeeded or Failed"
    Sep  3 21:06:59.822: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-wkqbk pod pod-secrets-a2172cd3-d55c-409b-ae87-4bc22245c6e7 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  3 21:06:59.843: INFO: Waiting for pod pod-secrets-a2172cd3-d55c-409b-ae87-4bc22245c6e7 to disappear
    Sep  3 21:06:59.845: INFO: Pod pod-secrets-a2172cd3-d55c-409b-ae87-4bc22245c6e7 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:06:59.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3738" for this suite.
    STEP: Destroying namespace "secret-namespace-4828" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":74,"skipped":1354,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:07:13.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-1752" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":114,"skipped":1975,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
    STEP: Deploying the webhook pod
    STEP: Wait for the deployment to be ready
    Sep  3 21:07:14.080: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep  3 21:07:17.105: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should unconditionally reject operations on fail closed webhook [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
    STEP: create a namespace for the webhook
    STEP: create a configmap should be unconditionally rejected by the webhook
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:07:17.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-43" for this suite.
    STEP: Destroying namespace "webhook-43-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":115,"skipped":1979,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:07:17.228: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-ba127299-4d89-47fe-87a2-577df9dfaa5c
    STEP: Creating a pod to test consume configMaps
    Sep  3 21:07:17.307: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d24ac9f9-e651-4bc5-b786-767ddf398b25" in namespace "projected-2219" to be "Succeeded or Failed"
    Sep  3 21:07:17.313: INFO: Pod "pod-projected-configmaps-d24ac9f9-e651-4bc5-b786-767ddf398b25": Phase="Pending", Reason="", readiness=false. Elapsed: 5.767324ms
    Sep  3 21:07:19.319: INFO: Pod "pod-projected-configmaps-d24ac9f9-e651-4bc5-b786-767ddf398b25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011714538s
    STEP: Saw pod success
    Sep  3 21:07:19.319: INFO: Pod "pod-projected-configmaps-d24ac9f9-e651-4bc5-b786-767ddf398b25" satisfied condition "Succeeded or Failed"
    Sep  3 21:07:19.321: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-projected-configmaps-d24ac9f9-e651-4bc5-b786-767ddf398b25 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 21:07:19.344: INFO: Waiting for pod pod-projected-configmaps-d24ac9f9-e651-4bc5-b786-767ddf398b25 to disappear
    Sep  3 21:07:19.347: INFO: Pod pod-projected-configmaps-d24ac9f9-e651-4bc5-b786-767ddf398b25 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:07:19.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2219" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":116,"skipped":1983,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] server version
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:07:19.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "server-version-5567" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":117,"skipped":2028,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:07:19.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-4002" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":118,"skipped":2044,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods Extended
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:07:19.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-6047" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":119,"skipped":2057,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:07:21.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-7344" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":75,"skipped":1364,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 21:07:19.708: INFO: Waiting up to 5m0s for pod "downwardapi-volume-137a781e-62c1-4863-96f5-7f9a71a4b4af" in namespace "projected-2769" to be "Succeeded or Failed"
    Sep  3 21:07:19.711: INFO: Pod "downwardapi-volume-137a781e-62c1-4863-96f5-7f9a71a4b4af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.635948ms
    Sep  3 21:07:21.714: INFO: Pod "downwardapi-volume-137a781e-62c1-4863-96f5-7f9a71a4b4af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00640337s
    STEP: Saw pod success
    Sep  3 21:07:21.715: INFO: Pod "downwardapi-volume-137a781e-62c1-4863-96f5-7f9a71a4b4af" satisfied condition "Succeeded or Failed"
    Sep  3 21:07:21.717: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod downwardapi-volume-137a781e-62c1-4863-96f5-7f9a71a4b4af container client-container: <nil>
    STEP: delete the pod
    Sep  3 21:07:21.732: INFO: Waiting for pod downwardapi-volume-137a781e-62c1-4863-96f5-7f9a71a4b4af to disappear
    Sep  3 21:07:21.735: INFO: Pod downwardapi-volume-137a781e-62c1-4863-96f5-7f9a71a4b4af no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:07:21.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2769" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":120,"skipped":2060,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:07:21.142: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-d314ce20-a7e2-41d4-a4b5-7a00231472a7
    STEP: Creating a pod to test consume configMaps
    Sep  3 21:07:21.189: INFO: Waiting up to 5m0s for pod "pod-configmaps-a42f6b16-ae2a-42bb-bcc9-5ba8794b5761" in namespace "configmap-4981" to be "Succeeded or Failed"
    Sep  3 21:07:21.192: INFO: Pod "pod-configmaps-a42f6b16-ae2a-42bb-bcc9-5ba8794b5761": Phase="Pending", Reason="", readiness=false. Elapsed: 2.830972ms
    Sep  3 21:07:23.197: INFO: Pod "pod-configmaps-a42f6b16-ae2a-42bb-bcc9-5ba8794b5761": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007324075s
    Sep  3 21:07:25.201: INFO: Pod "pod-configmaps-a42f6b16-ae2a-42bb-bcc9-5ba8794b5761": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011514408s
    STEP: Saw pod success
    Sep  3 21:07:25.201: INFO: Pod "pod-configmaps-a42f6b16-ae2a-42bb-bcc9-5ba8794b5761" satisfied condition "Succeeded or Failed"
    Sep  3 21:07:25.204: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-configmaps-a42f6b16-ae2a-42bb-bcc9-5ba8794b5761 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 21:07:25.221: INFO: Waiting for pod pod-configmaps-a42f6b16-ae2a-42bb-bcc9-5ba8794b5761 to disappear
    Sep  3 21:07:25.226: INFO: Pod pod-configmaps-a42f6b16-ae2a-42bb-bcc9-5ba8794b5761 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:07:25.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4981" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":76,"skipped":1377,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:07:31.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-3883" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":77,"skipped":1381,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:07:31.359: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should mount projected service account token [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test service account token: 
    Sep  3 21:07:31.397: INFO: Waiting up to 5m0s for pod "test-pod-c6bfb7fd-c6ef-43e8-9825-e9dc006793e2" in namespace "svcaccounts-4198" to be "Succeeded or Failed"
    Sep  3 21:07:31.401: INFO: Pod "test-pod-c6bfb7fd-c6ef-43e8-9825-e9dc006793e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126026ms
    Sep  3 21:07:33.406: INFO: Pod "test-pod-c6bfb7fd-c6ef-43e8-9825-e9dc006793e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008731861s
    STEP: Saw pod success
    Sep  3 21:07:33.406: INFO: Pod "test-pod-c6bfb7fd-c6ef-43e8-9825-e9dc006793e2" satisfied condition "Succeeded or Failed"
    Sep  3 21:07:33.409: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod test-pod-c6bfb7fd-c6ef-43e8-9825-e9dc006793e2 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 21:07:33.425: INFO: Waiting for pod test-pod-c6bfb7fd-c6ef-43e8-9825-e9dc006793e2 to disappear
    Sep  3 21:07:33.427: INFO: Pod test-pod-c6bfb7fd-c6ef-43e8-9825-e9dc006793e2 no longer exists
    [AfterEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:07:33.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-4198" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":78,"skipped":1401,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:07:41.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-5682" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":79,"skipped":1439,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:07:42.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3767" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":80,"skipped":1442,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 64 lines ...
    STEP: Destroying namespace "services-9809" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":121,"skipped":2077,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:07:48.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-7466" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":81,"skipped":1467,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:07:48.156: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-3727b900-3730-4a69-b6ec-9146f1faeefd
    STEP: Creating a pod to test consume secrets
    Sep  3 21:07:48.204: INFO: Waiting up to 5m0s for pod "pod-secrets-09d6bc3a-7c6a-43b4-a60e-52aaaf562662" in namespace "secrets-5421" to be "Succeeded or Failed"
    Sep  3 21:07:48.207: INFO: Pod "pod-secrets-09d6bc3a-7c6a-43b4-a60e-52aaaf562662": Phase="Pending", Reason="", readiness=false. Elapsed: 2.895795ms
    Sep  3 21:07:50.212: INFO: Pod "pod-secrets-09d6bc3a-7c6a-43b4-a60e-52aaaf562662": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007742397s
    STEP: Saw pod success
    Sep  3 21:07:50.212: INFO: Pod "pod-secrets-09d6bc3a-7c6a-43b4-a60e-52aaaf562662" satisfied condition "Succeeded or Failed"
    Sep  3 21:07:50.215: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-wkqbk pod pod-secrets-09d6bc3a-7c6a-43b4-a60e-52aaaf562662 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  3 21:07:50.233: INFO: Waiting for pod pod-secrets-09d6bc3a-7c6a-43b4-a60e-52aaaf562662 to disappear
    Sep  3 21:07:50.236: INFO: Pod pod-secrets-09d6bc3a-7c6a-43b4-a60e-52aaaf562662 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:07:50.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-5421" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":82,"skipped":1471,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with secret pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-secret-hns9
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  3 21:07:45.420: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hns9" in namespace "subpath-9937" to be "Succeeded or Failed"
    Sep  3 21:07:45.429: INFO: Pod "pod-subpath-test-secret-hns9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.595436ms
    Sep  3 21:07:47.434: INFO: Pod "pod-subpath-test-secret-hns9": Phase="Running", Reason="", readiness=true. Elapsed: 2.013939806s
    Sep  3 21:07:49.439: INFO: Pod "pod-subpath-test-secret-hns9": Phase="Running", Reason="", readiness=true. Elapsed: 4.019017302s
    Sep  3 21:07:51.443: INFO: Pod "pod-subpath-test-secret-hns9": Phase="Running", Reason="", readiness=true. Elapsed: 6.023337131s
    Sep  3 21:07:53.449: INFO: Pod "pod-subpath-test-secret-hns9": Phase="Running", Reason="", readiness=true. Elapsed: 8.029250076s
    Sep  3 21:07:55.453: INFO: Pod "pod-subpath-test-secret-hns9": Phase="Running", Reason="", readiness=true. Elapsed: 10.033112415s
    Sep  3 21:07:57.457: INFO: Pod "pod-subpath-test-secret-hns9": Phase="Running", Reason="", readiness=true. Elapsed: 12.037575133s
    Sep  3 21:07:59.462: INFO: Pod "pod-subpath-test-secret-hns9": Phase="Running", Reason="", readiness=true. Elapsed: 14.042608257s
    Sep  3 21:08:01.467: INFO: Pod "pod-subpath-test-secret-hns9": Phase="Running", Reason="", readiness=true. Elapsed: 16.047503947s
    Sep  3 21:08:03.472: INFO: Pod "pod-subpath-test-secret-hns9": Phase="Running", Reason="", readiness=true. Elapsed: 18.052627264s
    Sep  3 21:08:05.477: INFO: Pod "pod-subpath-test-secret-hns9": Phase="Running", Reason="", readiness=true. Elapsed: 20.057623022s
    Sep  3 21:08:07.481: INFO: Pod "pod-subpath-test-secret-hns9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.061467721s
    STEP: Saw pod success
    Sep  3 21:08:07.481: INFO: Pod "pod-subpath-test-secret-hns9" satisfied condition "Succeeded or Failed"
    Sep  3 21:08:07.485: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-subpath-test-secret-hns9 container test-container-subpath-secret-hns9: <nil>
    STEP: delete the pod
    Sep  3 21:08:07.504: INFO: Waiting for pod pod-subpath-test-secret-hns9 to disappear
    Sep  3 21:08:07.507: INFO: Pod pod-subpath-test-secret-hns9 no longer exists
    STEP: Deleting pod pod-subpath-test-secret-hns9
    Sep  3 21:08:07.507: INFO: Deleting pod "pod-subpath-test-secret-hns9" in namespace "subpath-9937"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:08:07.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-9937" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":122,"skipped":2088,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:08:07.539: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep  3 21:08:07.577: INFO: Waiting up to 5m0s for pod "security-context-1f52ef0e-f46c-4001-a53b-1547b7c45ab3" in namespace "security-context-6308" to be "Succeeded or Failed"
    Sep  3 21:08:07.580: INFO: Pod "security-context-1f52ef0e-f46c-4001-a53b-1547b7c45ab3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.785194ms
    Sep  3 21:08:09.584: INFO: Pod "security-context-1f52ef0e-f46c-4001-a53b-1547b7c45ab3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006679012s
    STEP: Saw pod success
    Sep  3 21:08:09.584: INFO: Pod "security-context-1f52ef0e-f46c-4001-a53b-1547b7c45ab3" satisfied condition "Succeeded or Failed"
    Sep  3 21:08:09.587: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod security-context-1f52ef0e-f46c-4001-a53b-1547b7c45ab3 container test-container: <nil>
    STEP: delete the pod
    Sep  3 21:08:09.600: INFO: Waiting for pod security-context-1f52ef0e-f46c-4001-a53b-1547b7c45ab3 to disappear
    Sep  3 21:08:09.603: INFO: Pod security-context-1f52ef0e-f46c-4001-a53b-1547b7c45ab3 no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:08:09.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-6308" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":123,"skipped":2103,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 21:08:09.649: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9677fb91-cdcc-4baa-b130-69d3e0cd8fd9" in namespace "downward-api-7272" to be "Succeeded or Failed"
    Sep  3 21:08:09.652: INFO: Pod "downwardapi-volume-9677fb91-cdcc-4baa-b130-69d3e0cd8fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.691734ms
    Sep  3 21:08:11.657: INFO: Pod "downwardapi-volume-9677fb91-cdcc-4baa-b130-69d3e0cd8fd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007503351s
    STEP: Saw pod success
    Sep  3 21:08:11.657: INFO: Pod "downwardapi-volume-9677fb91-cdcc-4baa-b130-69d3e0cd8fd9" satisfied condition "Succeeded or Failed"
    Sep  3 21:08:11.660: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod downwardapi-volume-9677fb91-cdcc-4baa-b130-69d3e0cd8fd9 container client-container: <nil>
    STEP: delete the pod
    Sep  3 21:08:11.675: INFO: Waiting for pod downwardapi-volume-9677fb91-cdcc-4baa-b130-69d3e0cd8fd9 to disappear
    Sep  3 21:08:11.678: INFO: Pod downwardapi-volume-9677fb91-cdcc-4baa-b130-69d3e0cd8fd9 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:08:11.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7272" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":124,"skipped":2107,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 21:08:11.738: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aeeb49c8-184f-4268-92a6-96293f9721ad" in namespace "projected-8192" to be "Succeeded or Failed"
    Sep  3 21:08:11.742: INFO: Pod "downwardapi-volume-aeeb49c8-184f-4268-92a6-96293f9721ad": Phase="Pending", Reason="", readiness=false. Elapsed: 3.428044ms
    Sep  3 21:08:13.749: INFO: Pod "downwardapi-volume-aeeb49c8-184f-4268-92a6-96293f9721ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009700797s
    STEP: Saw pod success
    Sep  3 21:08:13.749: INFO: Pod "downwardapi-volume-aeeb49c8-184f-4268-92a6-96293f9721ad" satisfied condition "Succeeded or Failed"
    Sep  3 21:08:13.755: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod downwardapi-volume-aeeb49c8-184f-4268-92a6-96293f9721ad container client-container: <nil>
    STEP: delete the pod
    Sep  3 21:08:13.776: INFO: Waiting for pod downwardapi-volume-aeeb49c8-184f-4268-92a6-96293f9721ad to disappear
    Sep  3 21:08:13.780: INFO: Pod downwardapi-volume-aeeb49c8-184f-4268-92a6-96293f9721ad no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:08:13.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8192" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":125,"skipped":2120,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:08:13.816: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-edbfa2a1-2dd2-444d-a615-863c8793c0ac
    STEP: Creating a pod to test consume configMaps
    Sep  3 21:08:13.862: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5d863eb9-8b3b-4954-81f2-c7f60cf6c5b0" in namespace "projected-387" to be "Succeeded or Failed"
    Sep  3 21:08:13.867: INFO: Pod "pod-projected-configmaps-5d863eb9-8b3b-4954-81f2-c7f60cf6c5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.404169ms
    Sep  3 21:08:15.871: INFO: Pod "pod-projected-configmaps-5d863eb9-8b3b-4954-81f2-c7f60cf6c5b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009149798s
    STEP: Saw pod success
    Sep  3 21:08:15.872: INFO: Pod "pod-projected-configmaps-5d863eb9-8b3b-4954-81f2-c7f60cf6c5b0" satisfied condition "Succeeded or Failed"
    Sep  3 21:08:15.875: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod pod-projected-configmaps-5d863eb9-8b3b-4954-81f2-c7f60cf6c5b0 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 21:08:15.889: INFO: Waiting for pod pod-projected-configmaps-5d863eb9-8b3b-4954-81f2-c7f60cf6c5b0 to disappear
    Sep  3 21:08:15.893: INFO: Pod pod-projected-configmaps-5d863eb9-8b3b-4954-81f2-c7f60cf6c5b0 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:08:15.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-387" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":126,"skipped":2137,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:08:20.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-6870" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":127,"skipped":2170,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 147 lines ...
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-fd73462c-584b-4d5c-b547-eb8be29e64e9
    STEP: Creating a pod to test consume secrets
    Sep  3 21:08:20.087: INFO: Waiting up to 5m0s for pod "pod-secrets-f2651336-799f-4b07-ad4d-15fcd39bd175" in namespace "secrets-340" to be "Succeeded or Failed"
    Sep  3 21:08:20.090: INFO: Pod "pod-secrets-f2651336-799f-4b07-ad4d-15fcd39bd175": Phase="Pending", Reason="", readiness=false. Elapsed: 2.919585ms
    Sep  3 21:08:22.095: INFO: Pod "pod-secrets-f2651336-799f-4b07-ad4d-15fcd39bd175": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008124577s
    STEP: Saw pod success
    Sep  3 21:08:22.095: INFO: Pod "pod-secrets-f2651336-799f-4b07-ad4d-15fcd39bd175" satisfied condition "Succeeded or Failed"
    Sep  3 21:08:22.099: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-secrets-f2651336-799f-4b07-ad4d-15fcd39bd175 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  3 21:08:22.116: INFO: Waiting for pod pod-secrets-f2651336-799f-4b07-ad4d-15fcd39bd175 to disappear
    Sep  3 21:08:22.119: INFO: Pod pod-secrets-f2651336-799f-4b07-ad4d-15fcd39bd175 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:08:22.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-340" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":128,"skipped":2171,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:08:22.132: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-088ac151-1fd3-4829-8707-fa7f86e5a1ab
    STEP: Creating a pod to test consume configMaps
    Sep  3 21:08:22.182: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a39a5984-0f31-41eb-a485-5b51940c508e" in namespace "projected-7343" to be "Succeeded or Failed"
    Sep  3 21:08:22.186: INFO: Pod "pod-projected-configmaps-a39a5984-0f31-41eb-a485-5b51940c508e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.541202ms
    Sep  3 21:08:24.191: INFO: Pod "pod-projected-configmaps-a39a5984-0f31-41eb-a485-5b51940c508e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008232168s
    STEP: Saw pod success
    Sep  3 21:08:24.191: INFO: Pod "pod-projected-configmaps-a39a5984-0f31-41eb-a485-5b51940c508e" satisfied condition "Succeeded or Failed"
    Sep  3 21:08:24.193: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-projected-configmaps-a39a5984-0f31-41eb-a485-5b51940c508e container projected-configmap-volume-test: <nil>
    STEP: delete the pod
    Sep  3 21:08:24.209: INFO: Waiting for pod pod-projected-configmaps-a39a5984-0f31-41eb-a485-5b51940c508e to disappear
    Sep  3 21:08:24.212: INFO: Pod pod-projected-configmaps-a39a5984-0f31-41eb-a485-5b51940c508e no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:08:24.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7343" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":129,"skipped":2172,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":83,"skipped":1582,"failed":0}

    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:08:21.175: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 44 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:08:43.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-5263" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":84,"skipped":1582,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:08:43.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3382" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":85,"skipped":1613,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:08:53.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-7098" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":86,"skipped":1625,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:09:00.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-8460" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":87,"skipped":1663,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] IngressClass API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:09:00.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingressclass-8299" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":88,"skipped":1688,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 86 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:09:05.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-5967" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":89,"skipped":1700,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:09:08.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-4490" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":90,"skipped":1712,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:09:15.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9561" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":91,"skipped":1713,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:09:15.641: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name projected-secret-test-44f28542-1538-40e4-b5b1-59561e153498
    STEP: Creating a pod to test consume secrets
    Sep  3 21:09:15.685: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-caffa53c-bf91-40d8-98ae-2fc8aa46651c" in namespace "projected-2911" to be "Succeeded or Failed"
    Sep  3 21:09:15.691: INFO: Pod "pod-projected-secrets-caffa53c-bf91-40d8-98ae-2fc8aa46651c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.744103ms
    Sep  3 21:09:17.695: INFO: Pod "pod-projected-secrets-caffa53c-bf91-40d8-98ae-2fc8aa46651c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009892469s
    STEP: Saw pod success
    Sep  3 21:09:17.695: INFO: Pod "pod-projected-secrets-caffa53c-bf91-40d8-98ae-2fc8aa46651c" satisfied condition "Succeeded or Failed"
    Sep  3 21:09:17.698: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod pod-projected-secrets-caffa53c-bf91-40d8-98ae-2fc8aa46651c container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  3 21:09:17.717: INFO: Waiting for pod pod-projected-secrets-caffa53c-bf91-40d8-98ae-2fc8aa46651c to disappear
    Sep  3 21:09:17.720: INFO: Pod pod-projected-secrets-caffa53c-bf91-40d8-98ae-2fc8aa46651c no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:09:17.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2911" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":92,"skipped":1726,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
    STEP: Registering the crd webhook via the AdmissionRegistration API
    Sep  3 21:08:37.898: INFO: Waiting for webhook configuration to be ready...
    Sep  3 21:08:48.009: INFO: Waiting for webhook configuration to be ready...
    Sep  3 21:08:58.112: INFO: Waiting for webhook configuration to be ready...
    Sep  3 21:09:08.217: INFO: Waiting for webhook configuration to be ready...
    Sep  3 21:09:18.227: INFO: Waiting for webhook configuration to be ready...
    Sep  3 21:09:18.227: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000242290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should deny crd creation [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  3 21:09:18.227: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000242290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2059
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":129,"skipped":2193,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:09:18.326: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
    STEP: Destroying namespace "webhook-3878-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":130,"skipped":2193,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:09:30.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-3587" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":93,"skipped":1732,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 51 lines ...
    STEP: Destroying namespace "services-9246" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":131,"skipped":2219,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:09:55.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-2951" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":132,"skipped":2242,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-3983-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":133,"skipped":2245,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:10:08.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3981" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":134,"skipped":2265,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:10:11.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-8918" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":135,"skipped":2281,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    STEP: Destroying namespace "services-6420" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":136,"skipped":2282,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:10:11.172: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating secret secrets-962/secret-test-0b00b57b-b468-4899-bb9f-b52c8b08731b
    STEP: Creating a pod to test consume secrets
    Sep  3 21:10:11.228: INFO: Waiting up to 5m0s for pod "pod-configmaps-f50a70ca-6ef4-475a-8cc0-166ef744a9a7" in namespace "secrets-962" to be "Succeeded or Failed"
    Sep  3 21:10:11.235: INFO: Pod "pod-configmaps-f50a70ca-6ef4-475a-8cc0-166ef744a9a7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.251712ms
    Sep  3 21:10:13.239: INFO: Pod "pod-configmaps-f50a70ca-6ef4-475a-8cc0-166ef744a9a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011217091s
    STEP: Saw pod success
    Sep  3 21:10:13.239: INFO: Pod "pod-configmaps-f50a70ca-6ef4-475a-8cc0-166ef744a9a7" satisfied condition "Succeeded or Failed"
    Sep  3 21:10:13.242: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod pod-configmaps-f50a70ca-6ef4-475a-8cc0-166ef744a9a7 container env-test: <nil>
    STEP: delete the pod
    Sep  3 21:10:13.256: INFO: Waiting for pod pod-configmaps-f50a70ca-6ef4-475a-8cc0-166ef744a9a7 to disappear
    Sep  3 21:10:13.259: INFO: Pod pod-configmaps-f50a70ca-6ef4-475a-8cc0-166ef744a9a7 no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:10:13.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-962" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":137,"skipped":2291,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Destroying namespace "services-5328" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":138,"skipped":2311,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:10:19.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4994" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":139,"skipped":2334,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:10:23.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-1652" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":94,"skipped":1810,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:10:23.155: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override command
    Sep  3 21:10:23.199: INFO: Waiting up to 5m0s for pod "client-containers-1b9e69e5-9e8a-42c3-acda-667f80218225" in namespace "containers-5591" to be "Succeeded or Failed"
    Sep  3 21:10:23.203: INFO: Pod "client-containers-1b9e69e5-9e8a-42c3-acda-667f80218225": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344677ms
    Sep  3 21:10:25.207: INFO: Pod "client-containers-1b9e69e5-9e8a-42c3-acda-667f80218225": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0085391s
    STEP: Saw pod success
    Sep  3 21:10:25.208: INFO: Pod "client-containers-1b9e69e5-9e8a-42c3-acda-667f80218225" satisfied condition "Succeeded or Failed"
    Sep  3 21:10:25.211: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod client-containers-1b9e69e5-9e8a-42c3-acda-667f80218225 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 21:10:25.225: INFO: Waiting for pod client-containers-1b9e69e5-9e8a-42c3-acda-667f80218225 to disappear
    Sep  3 21:10:25.228: INFO: Pod client-containers-1b9e69e5-9e8a-42c3-acda-667f80218225 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:10:25.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-5591" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":95,"skipped":1811,"failed":0}

    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:10:25.240: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:10:31.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7401" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":96,"skipped":1811,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:10:19.986: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  3 21:10:22.032: INFO: Deleting pod "var-expansion-5f052a6b-6f61-4180-8999-eb3f3c6a2eb0" in namespace "var-expansion-2818"
    Sep  3 21:10:22.037: INFO: Wait up to 5m0s for pod "var-expansion-5f052a6b-6f61-4180-8999-eb3f3c6a2eb0" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:10:34.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-2818" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":140,"skipped":2342,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 61 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:10:41.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4396" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":141,"skipped":2353,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:10:41.373: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-def0a5b9-bdd1-44d2-9f42-d9dd805757b5
    STEP: Creating a pod to test consume configMaps
    Sep  3 21:10:41.411: INFO: Waiting up to 5m0s for pod "pod-configmaps-de3c51a3-b191-4b42-bffa-635c48196453" in namespace "configmap-4473" to be "Succeeded or Failed"
    Sep  3 21:10:41.414: INFO: Pod "pod-configmaps-de3c51a3-b191-4b42-bffa-635c48196453": Phase="Pending", Reason="", readiness=false. Elapsed: 2.665778ms
    Sep  3 21:10:43.419: INFO: Pod "pod-configmaps-de3c51a3-b191-4b42-bffa-635c48196453": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007268792s
    STEP: Saw pod success
    Sep  3 21:10:43.419: INFO: Pod "pod-configmaps-de3c51a3-b191-4b42-bffa-635c48196453" satisfied condition "Succeeded or Failed"
    Sep  3 21:10:43.422: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-configmaps-de3c51a3-b191-4b42-bffa-635c48196453 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  3 21:10:43.440: INFO: Waiting for pod pod-configmaps-de3c51a3-b191-4b42-bffa-635c48196453 to disappear
    Sep  3 21:10:43.443: INFO: Pod pod-configmaps-de3c51a3-b191-4b42-bffa-635c48196453 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:10:43.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4473" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":142,"skipped":2387,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "webhook-860-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":143,"skipped":2388,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:10:47.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-4384" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":144,"skipped":2402,"failed":2,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-2712" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":97,"skipped":1847,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:10:50.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4565" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":98,"skipped":1853,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 271 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  29s   default-scheduler  Successfully assigned pod-network-test-7946/netserver-3 to k8s-upgrade-and-conformance-uljqkb-worker-tpmotr
      Normal  Pulled     29s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    29s   kubelet            Created container webserver
      Normal  Started    28s   kubelet            Started container webserver
    
    Sep  3 21:00:43.892: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.2.28:9080/dial?request=hostname&protocol=http&host=192.168.0.54&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-0:{}])
    Sep  3 21:00:43.892: INFO: ...failed...will try again in next pass
    Sep  3 21:00:43.892: INFO: Breadth first check of 192.168.1.44 on host 172.18.0.7...
    Sep  3 21:00:43.896: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.28:9080/dial?request=hostname&protocol=http&host=192.168.1.44&port=8080&tries=1'] Namespace:pod-network-test-7946 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  3 21:00:43.896: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  3 21:00:48.988: INFO: Waiting for responses: map[netserver-1:{}]
    Sep  3 21:00:50.990: INFO: 
    Output of kubectl describe pod pod-network-test-7946/netserver-0:
... skipping 240 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  37s   default-scheduler  Successfully assigned pod-network-test-7946/netserver-3 to k8s-upgrade-and-conformance-uljqkb-worker-tpmotr
      Normal  Pulled     37s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    37s   kubelet            Created container webserver
      Normal  Started    36s   kubelet            Started container webserver
    
    Sep  3 21:00:51.538: INFO: encountered error during dial (did not find expected responses... 
    Tries 1
    Command curl -g -q -s 'http://192.168.2.28:9080/dial?request=hostname&protocol=http&host=192.168.1.44&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-1:{}])
    Sep  3 21:00:51.538: INFO: ...failed...will try again in next pass
    Sep  3 21:00:51.538: INFO: Breadth first check of 192.168.6.107 on host 172.18.0.5...
    Sep  3 21:00:51.592: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.28:9080/dial?request=hostname&protocol=http&host=192.168.6.107&port=8080&tries=1'] Namespace:pod-network-test-7946 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  3 21:00:51.592: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  3 21:00:51.685: INFO: Waiting for responses: map[]
    Sep  3 21:00:51.686: INFO: reached 192.168.6.107 after 0/1 tries
    Sep  3 21:00:51.686: INFO: Breadth first check of 192.168.2.27 on host 172.18.0.6...
... skipping 387 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  6m4s  default-scheduler  Successfully assigned pod-network-test-7946/netserver-3 to k8s-upgrade-and-conformance-uljqkb-worker-tpmotr
      Normal  Pulled     6m4s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    6m4s  kubelet            Created container webserver
      Normal  Started    6m3s  kubelet            Started container webserver
    
    Sep  3 21:06:18.606: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.28:9080/dial?request=hostname&protocol=http&host=192.168.1.44&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-1:{}])
    Sep  3 21:06:18.606: INFO: ... Done probing pod [[[ 192.168.1.44 ]]]
    Sep  3 21:06:18.606: INFO: succeeded at polling 3 out of 4 connections
... skipping 382 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  11m   default-scheduler  Successfully assigned pod-network-test-7946/netserver-3 to k8s-upgrade-and-conformance-uljqkb-worker-tpmotr
      Normal  Pulled     11m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    11m   kubelet            Created container webserver
      Normal  Started    11m   kubelet            Started container webserver
    
    Sep  3 21:11:45.452: INFO: encountered error during dial (did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.28:9080/dial?request=hostname&protocol=http&host=192.168.0.54&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-0:{}])
    Sep  3 21:11:45.452: INFO: ... Done probing pod [[[ 192.168.0.54 ]]]
    Sep  3 21:11:45.452: INFO: succeeded at polling 2 out of 4 connections
    Sep  3 21:11:45.452: INFO: pod polling failure summary:
    Sep  3 21:11:45.452: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.28:9080/dial?request=hostname&protocol=http&host=192.168.1.44&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-1:{}]
    Sep  3 21:11:45.452: INFO: Collected error: did not find expected responses... 
    Tries 46
    Command curl -g -q -s 'http://192.168.2.28:9080/dial?request=hostname&protocol=http&host=192.168.0.54&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-0:{}]
    Sep  3 21:11:45.453: FAIL: failed,  2 out of 4 connections failed
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00112ad80)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep  3 21:11:45.453: failed,  2 out of 4 connections failed

    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
    ------------------------------
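    Editor's note: the "intra-pod communication: http" check that failed above drives everything through the agnhost /dial endpoint shown in the curl commands: the test-container pod is asked to dial each netserver pod and report which hostnames answered. Below is a minimal Go sketch of one such probe; the JSON shape of the reply ({"responses": [...]}) and the hard-coded addresses are assumptions taken from the curl command and the map[...] output in this log, not code from the run itself.

// probe_dial.go - sketch of the /dial probe behind the failing
// "intra-pod communication: http" check. Assumes the agnhost /dial
// endpoint answers with JSON like {"responses":["netserver-1"]}.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"time"
)

type dialReply struct {
	Responses []string `json:"responses"`
}

// dialOnce asks the test-container pod (proxyAddr, e.g. 192.168.2.28:9080)
// to dial targetHost:targetPort once and returns the hostnames that answered.
func dialOnce(proxyAddr, targetHost string, targetPort int) ([]string, error) {
	u := url.URL{
		Scheme: "http",
		Host:   proxyAddr,
		Path:   "/dial",
		RawQuery: url.Values{
			"request":  {"hostname"},
			"protocol": {"http"},
			"host":     {targetHost},
			"port":     {fmt.Sprint(targetPort)},
			"tries":    {"1"},
		}.Encode(),
	}
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(u.String())
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var reply dialReply
	if err := json.NewDecoder(resp.Body).Decode(&reply); err != nil {
		return nil, err
	}
	return reply.Responses, nil
}

func main() {
	// Addresses copied from the log; they only resolve inside the test cluster.
	got, err := dialOnce("192.168.2.28:9080", "192.168.1.44", 8080)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	// The check expects the target pod's hostname here, e.g. netserver-1;
	// an empty list corresponds to the "retrieved map[]" lines above.
	fmt.Println("responses:", got)
}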
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 134 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:11:51.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-91" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":99,"skipped":1854,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 21:11:52.103: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7720c4cc-062a-4e6d-85f5-892c5d393e3b" in namespace "projected-748" to be "Succeeded or Failed"

    Sep  3 21:11:52.112: INFO: Pod "downwardapi-volume-7720c4cc-062a-4e6d-85f5-892c5d393e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.617755ms
    Sep  3 21:11:54.116: INFO: Pod "downwardapi-volume-7720c4cc-062a-4e6d-85f5-892c5d393e3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013348504s
    STEP: Saw pod success
    Sep  3 21:11:54.116: INFO: Pod "downwardapi-volume-7720c4cc-062a-4e6d-85f5-892c5d393e3b" satisfied condition "Succeeded or Failed"

    Sep  3 21:11:54.119: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod downwardapi-volume-7720c4cc-062a-4e6d-85f5-892c5d393e3b container client-container: <nil>
    STEP: delete the pod
    Sep  3 21:11:54.143: INFO: Waiting for pod downwardapi-volume-7720c4cc-062a-4e6d-85f5-892c5d393e3b to disappear
    Sep  3 21:11:54.146: INFO: Pod downwardapi-volume-7720c4cc-062a-4e6d-85f5-892c5d393e3b no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:11:54.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-748" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":100,"skipped":1856,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 158 lines ...
    Sep  3 21:04:47.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4595 create -f -'
    Sep  3 21:04:47.423: INFO: stderr: ""
    Sep  3 21:04:47.423: INFO: stdout: "deployment.apps/agnhost-replica created\n"
    STEP: validating guestbook app
    Sep  3 21:04:47.423: INFO: Waiting for all frontend pods to be Running.
    Sep  3 21:04:52.474: INFO: Waiting for frontend to serve content.
    Sep  3 21:08:25.536: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status Failure - error trying to reach service: read tcp 172.18.0.9:52042->192.168.2.41:80: read: connection reset by peer (reason: ServiceUnavailable)
    Sep  3 21:08:30.545: INFO: Trying to add a new entry to the guestbook.
    Sep  3 21:08:35.554: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 

    Sep  3 21:12:14.916: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: v1.Status Failure - error trying to reach service: read tcp 172.18.0.9:59096->192.168.2.41:80: read: connection reset by peer (reason: ServiceUnavailable)
    Sep  3 21:12:19.917: FAIL: Cannot add new entry in 180 seconds.

    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/kubectl.glob..func1.7.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:372 +0x159
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003ab3500)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 42 lines ...
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep  3 21:12:19.917: Cannot add new entry in 180 seconds.
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:372
    ------------------------------
    {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":48,"skipped":837,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:12:20.732: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 155 lines ...
    Sep  3 21:12:22.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-411 create -f -'
    Sep  3 21:12:22.687: INFO: stderr: ""
    Sep  3 21:12:22.687: INFO: stdout: "deployment.apps/agnhost-replica created\n"
    STEP: validating guestbook app
    Sep  3 21:12:22.687: INFO: Waiting for all frontend pods to be Running.
    Sep  3 21:12:27.738: INFO: Waiting for frontend to serve content.
    Sep  3 21:12:32.751: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 

    Sep  3 21:12:37.762: INFO: Trying to add a new entry to the guestbook.
    Sep  3 21:12:37.772: INFO: Verifying that added entry can be retrieved.
    Sep  3 21:12:42.780: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 

    STEP: using delete to clean up resources
    Sep  3 21:12:47.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-411 delete --grace-period=0 --force -f -'
    Sep  3 21:12:48.085: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
    Sep  3 21:12:48.085: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
    STEP: using delete to clean up resources
    Sep  3 21:12:48.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-411 delete --grace-period=0 --force -f -'
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:12:49.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-411" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":49,"skipped":837,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:12:49.523: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 21:12:49.680: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e32857e3-3921-4b56-b3b7-a91eb2ba8037" in namespace "projected-2880" to be "Succeeded or Failed"

    Sep  3 21:12:49.689: INFO: Pod "downwardapi-volume-e32857e3-3921-4b56-b3b7-a91eb2ba8037": Phase="Pending", Reason="", readiness=false. Elapsed: 9.062123ms
    Sep  3 21:12:51.696: INFO: Pod "downwardapi-volume-e32857e3-3921-4b56-b3b7-a91eb2ba8037": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016152947s
    STEP: Saw pod success
    Sep  3 21:12:51.697: INFO: Pod "downwardapi-volume-e32857e3-3921-4b56-b3b7-a91eb2ba8037" satisfied condition "Succeeded or Failed"

    Sep  3 21:12:51.702: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-wkqbk pod downwardapi-volume-e32857e3-3921-4b56-b3b7-a91eb2ba8037 container client-container: <nil>
    STEP: delete the pod
    Sep  3 21:12:51.746: INFO: Waiting for pod downwardapi-volume-e32857e3-3921-4b56-b3b7-a91eb2ba8037 to disappear
    Sep  3 21:12:51.749: INFO: Pod downwardapi-volume-e32857e3-3921-4b56-b3b7-a91eb2ba8037 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:12:51.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2880" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":837,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:12:55.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-9564" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":51,"skipped":860,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.987 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":867,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
    
    STEP: creating a pod to probe /etc/hosts
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  3 21:15:31.520: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-1495.svc.cluster.local from pod dns-1495/dns-test-f9ad57e1-e9e4-43b5-877d-5327238d7ae4: the server is currently unable to handle the request (get pods dns-test-f9ad57e1-e9e4-43b5-877d-5327238d7ae4)
    Sep  3 21:16:58.221: FAIL: Unable to read wheezy_hosts@dns-querier-1 from pod dns-1495/dns-test-f9ad57e1-e9e4-43b5-877d-5327238d7ae4: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-1495/pods/dns-test-f9ad57e1-e9e4-43b5-877d-5327238d7ae4/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded

    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc002af1d68, 0x29a3500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00629d410, 0xc002af1d68, 0xc00629d410, 0xc002af1d68)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
    testing.tRunner(0xc000f2b200, 0x70fea78)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0903 21:16:58.222245      19 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  3 21:16:58.221: Unable to read wheezy_hosts@dns-querier-1 from pod dns-1495/dns-test-f9ad57e1-e9e4-43b5-877d-5327238d7ae4: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-1495/pods/dns-test-f9ad57e1-e9e4-43b5-877d-5327238d7ae4/proxy/results/wheezy_hosts@dns-querier-1\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc002af1d68, 0x29a3500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00629d410, 0xc002af1d68, 0xc00629d410, 0xc002af1d68)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc002af1d68, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc00606cd80, 0x8, 0x8, 0x6ee63d3, 0x7, 0xc000581c00, 0x77b8c18, 0xc0021e8b00, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000da7e40, 0xc000581c00, 0xc00606cd80, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.4()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:127 +0x62a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000f2b200)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000f2b200)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc000f2b200, 0x70fea78)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a84100, 0xc0031100c0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6a84100, 0xc0031100c0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0040d6140, 0x12f, 0x86a5e60, 0x7d, 0xd3, 0xc000ff9000, 0x7fc)

    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x61dbcc0, 0x75da840)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc0040d6140, 0x12f, 0xc002af17a8, 0x1, 0x1)

    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0040d6140, 0x12f, 0xc002af1890, 0x1, 0x1)

    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x6f89b47, 0x24, 0xc002af1af0, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc00629d400, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc002af1d68, 0x29a3500, 0x0, 0x0)
... skipping 94 lines ...
    STEP: Destroying namespace "services-9339" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":53,"skipped":903,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:17:03.968: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename replicaset
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:17:14.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-5944" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":54,"skipped":903,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:17:20.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7860" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":950,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-5811-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":56,"skipped":953,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SS
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":116,"failed":4,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:11:45.475: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 283 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  29s   default-scheduler  Successfully assigned pod-network-test-5998/netserver-3 to k8s-upgrade-and-conformance-uljqkb-worker-tpmotr
      Normal  Pulled     29s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    29s   kubelet            Created container webserver
      Normal  Started    29s   kubelet            Started container webserver
    
    Sep  3 21:12:15.455: INFO: encountered error during dial (did not find expected responses... 

    Tries 1
    Command curl -g -q -s 'http://192.168.0.102:9080/dial?request=hostname&protocol=http&host=192.168.2.50&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}])
    Sep  3 21:12:15.455: INFO: ...failed...will try again in next pass

    Sep  3 21:12:15.455: INFO: Going to retry 1 out of 4 pods....
    Sep  3 21:12:15.455: INFO: Double-checking 1 pod on host 172.18.0.6 which wasn't seen the first time.
    Sep  3 21:12:15.455: INFO: Now attempting to probe pod [[[ 192.168.2.50 ]]]
    Sep  3 21:12:15.458: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.102:9080/dial?request=hostname&protocol=http&host=192.168.2.50&port=8080&tries=1'] Namespace:pod-network-test-5998 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  3 21:12:15.458: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  3 21:12:20.546: INFO: Waiting for responses: map[netserver-3:{}]
... skipping 377 lines ...
      ----    ------     ----   ----               -------
      Normal  Scheduled  5m59s  default-scheduler  Successfully assigned pod-network-test-5998/netserver-3 to k8s-upgrade-and-conformance-uljqkb-worker-tpmotr
      Normal  Pulled     5m59s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    5m59s  kubelet            Created container webserver
      Normal  Started    5m59s  kubelet            Started container webserver
    
    Sep  3 21:17:45.368: INFO: encountered error during dial (did not find expected responses... 

    Tries 46
    Command curl -g -q -s 'http://192.168.0.102:9080/dial?request=hostname&protocol=http&host=192.168.2.50&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}])
    Sep  3 21:17:45.368: INFO: ... Done probing pod [[[ 192.168.2.50 ]]]
    Sep  3 21:17:45.368: INFO: succeeded at polling 3 out of 4 connections
    Sep  3 21:17:45.368: INFO: pod polling failure summary:
    Sep  3 21:17:45.368: INFO: Collected error: did not find expected responses... 

    Tries 46
    Command curl -g -q -s 'http://192.168.0.102:9080/dial?request=hostname&protocol=http&host=192.168.2.50&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}]
    Sep  3 21:17:45.368: FAIL: failed,  1 out of 4 connections failed

    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00112ad80)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep  3 21:17:45.368: failed,  1 out of 4 connections failed

    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:18:02.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-2286" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":57,"skipped":955,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] version v1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 39 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:18:04.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "proxy-5982" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":58,"skipped":957,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:18:08.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-1075" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":59,"skipped":977,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:18:08.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-5974" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":60,"skipped":978,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:18:08.899: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide host IP as an env var [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  3 21:18:08.962: INFO: Waiting up to 5m0s for pod "downward-api-d18fa856-ee07-4b7f-a0ab-b5720c51f79c" in namespace "downward-api-4674" to be "Succeeded or Failed"

    Sep  3 21:18:08.969: INFO: Pod "downward-api-d18fa856-ee07-4b7f-a0ab-b5720c51f79c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.523354ms
    Sep  3 21:18:10.976: INFO: Pod "downward-api-d18fa856-ee07-4b7f-a0ab-b5720c51f79c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013063979s
    STEP: Saw pod success
    Sep  3 21:18:10.976: INFO: Pod "downward-api-d18fa856-ee07-4b7f-a0ab-b5720c51f79c" satisfied condition "Succeeded or Failed"

    Sep  3 21:18:10.981: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod downward-api-d18fa856-ee07-4b7f-a0ab-b5720c51f79c container dapi-container: <nil>
    STEP: delete the pod
    Sep  3 21:18:11.024: INFO: Waiting for pod downward-api-d18fa856-ee07-4b7f-a0ab-b5720c51f79c to disappear
    Sep  3 21:18:11.033: INFO: Pod downward-api-d18fa856-ee07-4b7f-a0ab-b5720c51f79c no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:18:11.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-4674" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":61,"skipped":988,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:18:11.188: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-c1eeceac-8b22-4f75-97c1-2806484e0b28
    STEP: Creating a pod to test consume secrets
    Sep  3 21:18:11.264: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e307f8c1-7901-43f5-9570-dbcfde48752a" in namespace "projected-2605" to be "Succeeded or Failed"

    Sep  3 21:18:11.268: INFO: Pod "pod-projected-secrets-e307f8c1-7901-43f5-9570-dbcfde48752a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.962416ms
    Sep  3 21:18:13.274: INFO: Pod "pod-projected-secrets-e307f8c1-7901-43f5-9570-dbcfde48752a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009924662s
    STEP: Saw pod success
    Sep  3 21:18:13.274: INFO: Pod "pod-projected-secrets-e307f8c1-7901-43f5-9570-dbcfde48752a" satisfied condition "Succeeded or Failed"

    Sep  3 21:18:13.279: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-projected-secrets-e307f8c1-7901-43f5-9570-dbcfde48752a container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  3 21:18:13.326: INFO: Waiting for pod pod-projected-secrets-e307f8c1-7901-43f5-9570-dbcfde48752a to disappear
    Sep  3 21:18:13.332: INFO: Pod pod-projected-secrets-e307f8c1-7901-43f5-9570-dbcfde48752a no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:18:13.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2605" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":62,"skipped":1032,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:18:13.369: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create ConfigMap with empty key [Conformance]

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap that has name configmap-test-emptyKey-995f1f98-dba6-43b3-969c-50e8a390ea60
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:18:13.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-667" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":63,"skipped":1039,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:18:13.472: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep  3 21:18:13.538: INFO: Waiting up to 5m0s for pod "pod-be002bd0-b2bd-40a3-b47d-560801e22c19" in namespace "emptydir-5221" to be "Succeeded or Failed"

    Sep  3 21:18:13.544: INFO: Pod "pod-be002bd0-b2bd-40a3-b47d-560801e22c19": Phase="Pending", Reason="", readiness=false. Elapsed: 6.2702ms
    Sep  3 21:18:15.552: INFO: Pod "pod-be002bd0-b2bd-40a3-b47d-560801e22c19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014285225s
    STEP: Saw pod success
    Sep  3 21:18:15.553: INFO: Pod "pod-be002bd0-b2bd-40a3-b47d-560801e22c19" satisfied condition "Succeeded or Failed"

    Sep  3 21:18:15.556: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod pod-be002bd0-b2bd-40a3-b47d-560801e22c19 container test-container: <nil>
    STEP: delete the pod
    Sep  3 21:18:15.575: INFO: Waiting for pod pod-be002bd0-b2bd-40a3-b47d-560801e22c19 to disappear
    Sep  3 21:18:15.584: INFO: Pod pod-be002bd0-b2bd-40a3-b47d-560801e22c19 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:18:15.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-5221" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1045,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:18:15.600: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename deployment
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:18:17.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-1739" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":65,"skipped":1045,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-f5xg
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  3 21:18:17.862: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-f5xg" in namespace "subpath-1329" to be "Succeeded or Failed"

    Sep  3 21:18:17.867: INFO: Pod "pod-subpath-test-configmap-f5xg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.694585ms
    Sep  3 21:18:19.874: INFO: Pod "pod-subpath-test-configmap-f5xg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011268701s
    Sep  3 21:18:21.881: INFO: Pod "pod-subpath-test-configmap-f5xg": Phase="Running", Reason="", readiness=true. Elapsed: 4.018179353s
    Sep  3 21:18:23.886: INFO: Pod "pod-subpath-test-configmap-f5xg": Phase="Running", Reason="", readiness=true. Elapsed: 6.023669452s
    Sep  3 21:18:25.893: INFO: Pod "pod-subpath-test-configmap-f5xg": Phase="Running", Reason="", readiness=true. Elapsed: 8.030117342s
    Sep  3 21:18:27.899: INFO: Pod "pod-subpath-test-configmap-f5xg": Phase="Running", Reason="", readiness=true. Elapsed: 10.036506879s
... skipping 2 lines ...
    Sep  3 21:18:33.919: INFO: Pod "pod-subpath-test-configmap-f5xg": Phase="Running", Reason="", readiness=true. Elapsed: 16.056702818s
    Sep  3 21:18:35.925: INFO: Pod "pod-subpath-test-configmap-f5xg": Phase="Running", Reason="", readiness=true. Elapsed: 18.062294873s
    Sep  3 21:18:37.931: INFO: Pod "pod-subpath-test-configmap-f5xg": Phase="Running", Reason="", readiness=true. Elapsed: 20.06900345s
    Sep  3 21:18:39.938: INFO: Pod "pod-subpath-test-configmap-f5xg": Phase="Running", Reason="", readiness=true. Elapsed: 22.075205157s
    Sep  3 21:18:41.945: INFO: Pod "pod-subpath-test-configmap-f5xg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.08252521s
    STEP: Saw pod success
    Sep  3 21:18:41.945: INFO: Pod "pod-subpath-test-configmap-f5xg" satisfied condition "Succeeded or Failed"

    Sep  3 21:18:41.951: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-md-0-rg248-796ff9996-j7vhm pod pod-subpath-test-configmap-f5xg container test-container-subpath-configmap-f5xg: <nil>
    STEP: delete the pod
    Sep  3 21:18:41.979: INFO: Waiting for pod pod-subpath-test-configmap-f5xg to disappear
    Sep  3 21:18:41.984: INFO: Pod pod-subpath-test-configmap-f5xg no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-f5xg
    Sep  3 21:18:41.984: INFO: Deleting pod "pod-subpath-test-configmap-f5xg" in namespace "subpath-1329"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:18:41.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-1329" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":66,"skipped":1046,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
    STEP: Destroying namespace "webhook-2090-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":67,"skipped":1059,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  3 21:18:45.939: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5fa680be-ab75-4c1d-a04f-ed6b17806426" in namespace "projected-9961" to be "Succeeded or Failed"

    Sep  3 21:18:45.951: INFO: Pod "downwardapi-volume-5fa680be-ab75-4c1d-a04f-ed6b17806426": Phase="Pending", Reason="", readiness=false. Elapsed: 12.199557ms
    Sep  3 21:18:47.960: INFO: Pod "downwardapi-volume-5fa680be-ab75-4c1d-a04f-ed6b17806426": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020743129s
    STEP: Saw pod success
    Sep  3 21:18:47.960: INFO: Pod "downwardapi-volume-5fa680be-ab75-4c1d-a04f-ed6b17806426" satisfied condition "Succeeded or Failed"

    Sep  3 21:18:47.970: INFO: Trying to get logs from node k8s-upgrade-and-conformance-uljqkb-worker-gvulve pod downwardapi-volume-5fa680be-ab75-4c1d-a04f-ed6b17806426 container client-container: <nil>
    STEP: delete the pod
    Sep  3 21:18:47.999: INFO: Waiting for pod downwardapi-volume-5fa680be-ab75-4c1d-a04f-ed6b17806426 to disappear
    Sep  3 21:18:48.009: INFO: Pod downwardapi-volume-5fa680be-ab75-4c1d-a04f-ed6b17806426 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  3 21:18:48.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9961" for this suite.
    
    •
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":100,"skipped":1862,"failed":1,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:16:58.265: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
    
    STEP: creating a pod to probe /etc/hosts
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  3 21:20:34.624: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-8049.svc.cluster.local from pod dns-8049/dns-test-2381a876-6c31-40f6-a1a1-ebc917bcf1b5: the server is currently unable to handle the request (get pods dns-test-2381a876-6c31-40f6-a1a1-ebc917bcf1b5)
    Sep  3 21:22:00.439: FAIL: Unable to read wheezy_hosts@dns-querier-1 from pod dns-8049/dns-test-2381a876-6c31-40f6-a1a1-ebc917bcf1b5: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-8049/pods/dns-test-2381a876-6c31-40f6-a1a1-ebc917bcf1b5/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded

    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001addd68, 0x29a3500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0060383c0, 0xc001addd68, 0xc0060383c0, 0xc001addd68)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
    testing.tRunner(0xc000f2b200, 0x70fea78)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0903 21:22:00.439714      19 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  3 21:22:00.439: Unable to read wheezy_hosts@dns-querier-1 from pod dns-8049/dns-test-2381a876-6c31-40f6-a1a1-ebc917bcf1b5: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-8049/pods/dns-test-2381a876-6c31-40f6-a1a1-ebc917bcf1b5/proxy/results/wheezy_hosts@dns-querier-1\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001addd68, 0x29a3500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0060383c0, 0xc001addd68, 0xc0060383c0, 0xc001addd68)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc001addd68, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc005fb0000, 0x8, 0x8, 0x6ee63d3, 0x7, 0xc000077400, 0x77b8c18, 0xc00276a580, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000da7e40, 0xc000077400, 0xc005fb0000, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.4()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:127 +0x62a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000f2b200)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000f2b200)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc000f2b200, 0x70fea78)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a84100, 0xc002a3f600)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6a84100, 0xc002a3f600)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc003ed8140, 0x12f, 0x86a5e60, 0x7d, 0xd3, 0xc00110c000, 0x7fc)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x61dbcc0, 0x75da840)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc003ed8140, 0x12f, 0xc001add7a8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc003ed8140, 0x12f, 0xc001add890, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x6f89b47, 0x24, 0xc001addaf0, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc006038300, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001addd68, 0x29a3500, 0x0, 0x0)
... skipping 54 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  3 21:22:00.439: Unable to read wheezy_hosts@dns-querier-1 from pod dns-8049/dns-test-2381a876-6c31-40f6-a1a1-ebc917bcf1b5: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-8049/pods/dns-test-2381a876-6c31-40f6-a1a1-ebc917bcf1b5/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":116,"failed":5,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:17:45.396: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 283 lines ...
      ----    ------     ----  ----               -------
      Normal  Scheduled  32s   default-scheduler  Successfully assigned pod-network-test-364/netserver-3 to k8s-upgrade-and-conformance-uljqkb-worker-tpmotr
      Normal  Pulled     32s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    32s   kubelet            Created container webserver
      Normal  Started    32s   kubelet            Started container webserver
    
    Sep  3 21:18:18.521: INFO: encountered error during dial (did not find expected responses... 

    Tries 1
    Command curl -g -q -s 'http://192.168.0.109:9080/dial?request=hostname&protocol=http&host=192.168.2.54&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}])
    Sep  3 21:18:18.521: INFO: ...failed...will try again in next pass

    Sep  3 21:18:18.521: INFO: Going to retry 1 out of 4 pods....
    Sep  3 21:18:18.521: INFO: Doublechecking 1 pods in host 172.18.0.6 which werent seen the first time.
    Sep  3 21:18:18.521: INFO: Now attempting to probe pod [[[ 192.168.2.54 ]]]
    Sep  3 21:18:18.533: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.109:9080/dial?request=hostname&protocol=http&host=192.168.2.54&port=8080&tries=1'] Namespace:pod-network-test-364 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
    Sep  3 21:18:18.533: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  3 21:18:23.728: INFO: Waiting for responses: map[netserver-3:{}]
... skipping 377 lines ...
      ----    ------     ----   ----               -------
      Normal  Scheduled  6m     default-scheduler  Successfully assigned pod-network-test-364/netserver-3 to k8s-upgrade-and-conformance-uljqkb-worker-tpmotr
      Normal  Pulled     5m59s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
      Normal  Created    5m59s  kubelet            Created container webserver
      Normal  Started    5m59s  kubelet            Started container webserver
    
    Sep  3 21:23:45.865: INFO: encountered error during dial (did not find expected responses... 

    Tries 46
    Command curl -g -q -s 'http://192.168.0.109:9080/dial?request=hostname&protocol=http&host=192.168.2.54&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}])
    Sep  3 21:23:45.865: INFO: ... Done probing pod [[[ 192.168.2.54 ]]]
    Sep  3 21:23:45.865: INFO: succeeded at polling 3 out of 4 connections
    Sep  3 21:23:45.865: INFO: pod polling failure summary:
    Sep  3 21:23:45.865: INFO: Collected error: did not find expected responses... 

    Tries 46
    Command curl -g -q -s 'http://192.168.0.109:9080/dial?request=hostname&protocol=http&host=192.168.2.54&port=8080&tries=1'
    retrieved map[]
    expected map[netserver-3:{}]
    Sep  3 21:23:45.865: FAIL: failed,  1 out of 4 connections failed

    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00112ad80)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
      Granular Checks: Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
        should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep  3 21:23:45.865: failed,  1 out of 4 connections failed

    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
    ------------------------------
    {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":116,"failed":6,"failures":["[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

    Sep  3 21:23:45.881: INFO: Running AfterSuite actions on all nodes
    
    
    {"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":100,"skipped":1862,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  3 21:22:00.469: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
    
    STEP: creating a pod to probe /etc/hosts
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  3 21:25:35.684: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-9116.svc.cluster.local from pod dns-9116/dns-test-610494b3-3afb-44a4-a3cd-deeb5f9c3216: the server is currently unable to handle the request (get pods dns-test-610494b3-3afb-44a4-a3cd-deeb5f9c3216)
    Sep  3 21:27:02.531: FAIL: Unable to read wheezy_hosts@dns-querier-1 from pod dns-9116/dns-test-610494b3-3afb-44a4-a3cd-deeb5f9c3216: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-9116/pods/dns-test-610494b3-3afb-44a4-a3cd-deeb5f9c3216/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded

    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001addd68, 0x29a3500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00629c5e8, 0xc001addd68, 0xc00629c5e8, 0xc001addd68)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
    testing.tRunner(0xc000f2b200, 0x70fea78)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0903 21:27:02.532524      19 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  3 21:27:02.531: Unable to read wheezy_hosts@dns-querier-1 from pod dns-9116/dns-test-610494b3-3afb-44a4-a3cd-deeb5f9c3216: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-9116/pods/dns-test-610494b3-3afb-44a4-a3cd-deeb5f9c3216/proxy/results/wheezy_hosts@dns-querier-1\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001addd68, 0x29a3500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00629c5e8, 0xc001addd68, 0xc00629c5e8, 0xc001addd68)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc001addd68, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc00625c780, 0x8, 0x8, 0x6ee63d3, 0x7, 0xc002c77c00, 0x77b8c18, 0xc002c04160, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000da7e40, 0xc002c77c00, 0xc00625c780, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.4()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:127 +0x62a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000f2b200)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000f2b200)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc000f2b200, 0x70fea78)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a84100, 0xc001952700)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6a84100, 0xc001952700)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0040d6140, 0x12f, 0x86a5e60, 0x7d, 0xd3, 0xc002a27000, 0x7fc)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x61dbcc0, 0x75da840)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc0040d6140, 0x12f, 0xc001add7a8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0040d6140, 0x12f, 0xc001add890, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x6f89b47, 0x24, 0xc001addaf0, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc00629c500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc001addd68, 0x29a3500, 0x0, 0x0)
... skipping 54 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  3 21:27:02.531: Unable to read wheezy_hosts@dns-querier-1 from pod dns-9116/dns-test-610494b3-3afb-44a4-a3cd-deeb5f9c3216: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-9116/pods/dns-test-610494b3-3afb-44a4-a3cd-deeb5f9c3216/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":100,"skipped":1862,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    Sep  3 21:27:02.560: INFO: Running AfterSuite actions on all nodes
    
    STEP: Dumping logs from the "k8s-upgrade-and-conformance-uljqkb" workload cluster 09/03/22 21:31:15.196
    STEP: Dumping all the Cluster API resources in the "k8s-upgrade-and-conformance-ie185u" namespace 09/03/22 21:31:18.597
    STEP: Deleting cluster k8s-upgrade-and-conformance-ie185u/k8s-upgrade-and-conformance-uljqkb 09/03/22 21:31:18.944
    STEP: Deleting cluster k8s-upgrade-and-conformance-uljqkb 09/03/22 21:31:18.97
... skipping 621 lines ...
  [INTERRUPTED] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] [It] Should create and upgrade a workload cluster and eventually run kubetest
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118
  [INTERRUPTED] [SynchronizedAfterSuite] 
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/e2e_suite_test.go:169

Ran 1 of 21 Specs in 3503.502 seconds
FAIL! - Interrupted by Other Ginkgo Process -- 0 Passed | 1 Failed | 0 Pending | 20 Skipped


Ginkgo ran 1 suite in 1h0m14.682617425s

Test Suite Failed
make: *** [Makefile:129: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
++ pgrep -f 'docker events'
+ kill 26139
++ pgrep -f 'ctr -n moby events'
+ kill 26140
... skipping 23 lines ...