PR: Arvinderpal: Initialize FeatureGate map for KubeProxy config. #1929
Result: FAILURE
Tests: 116 failed / 347 succeeded
Started: 2019-11-22 09:43
Elapsed: 1h0m
Revision: c025cfcd9c98746b7f6f3dd86ef16490381ac9d5
Refs: 85524

Test Failures


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance] 15m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\sbe\sable\sto\spull\simage\s\[NodeConformance\]$'
test/e2e/common/runtime.go:374
Nov 22 10:22:27.206: All 3 attempts failed: expected container state: Running, got: "Waiting"
test/e2e/common/runtime.go:364
				
stdout/stderr: junit_22.xml


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance] 15m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\sbe\sable\sto\spull\simage\s\[NodeConformance\]$'
test/e2e/common/runtime.go:374
Nov 22 10:07:25.238: All 3 attempts failed: expected container state: Running, got: "Waiting"
test/e2e/common/runtime.go:364
				
stdout/stderr: junit_22.xml


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance] 15m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\snot\sbe\sable\sto\spull\simage\sfrom\sinvalid\sregistry\s\[NodeConformance\]$'
test/e2e/common/runtime.go:369
Nov 22 10:20:50.851: All 3 attempts failed: unexpected waiting reason: "ContainerCreating"
test/e2e/common/runtime.go:364
				
stdout/stderr: junit_17.xml


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance] 15m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\snot\sbe\sable\sto\spull\simage\sfrom\sinvalid\sregistry\s\[NodeConformance\]$'
test/e2e/common/runtime.go:369
Nov 22 10:05:48.888: All 3 attempts failed: unexpected waiting reason: "ContainerCreating"
test/e2e/common/runtime.go:364
				
stdout/stderr: junit_17.xml


Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 5m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\s\/healthz\shttp\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:721
Nov 22 09:54:53.604: starting pod test-webserver-82800f5c-eacb-4e2a-ac25-193bd4df9d7c in namespace container-probe-738
Unexpected error:
    <*errors.errorString | 0xc0000a1960>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/common/container_probe.go:422
				
stdout/stderr: junit_13.xml


Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 5m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\s\/healthz\shttp\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:721
Nov 22 09:59:54.004: starting pod test-webserver-de7bafc7-6ecb-4b3c-98db-affb6927a411 in namespace container-probe-3023
Unexpected error:
    <*errors.errorString | 0xc0000a1960>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/common/container_probe.go:422
				
stdout/stderr: junit_13.xml


Kubernetes e2e suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] 5m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sshould\snot\sbe\sready\sbefore\sinitial\sdelay\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:721
Nov 22 10:35:11.307: Unexpected error:
    <*errors.errorString | 0xc00263a590>: {
        s: "want pod 'test-webserver-63fc4e32-4dfe-4978-92a5-daf22a3fe0bc' on 'kind-worker' to be 'Running' but was 'Pending'",
    }
    want pod 'test-webserver-63fc4e32-4dfe-4978-92a5-daf22a3fe0bc' on 'kind-worker' to be 'Running' but was 'Pending'
occurred
test/e2e/common/container_probe.go:68
				
stdout/stderr: junit_02.xml


Kubernetes e2e suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] 5m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sshould\snot\sbe\sready\sbefore\sinitial\sdelay\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:721
Nov 22 10:30:10.920: Unexpected error:
    <*errors.errorString | 0xc000212f40>: {
        s: "want pod 'test-webserver-fc3d6ef1-815b-4d87-b92a-20039f2b4cf3' on 'kind-worker' to be 'Running' but was 'Pending'",
    }
    want pod 'test-webserver-fc3d6ef1-815b-4d87-b92a-20039f2b4cf3' on 'kind-worker' to be 'Running' but was 'Pending'
occurred
test/e2e/common/container_probe.go:68
				
stdout/stderr: junit_02.xml


Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly] 5m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\snot\srun\swith\san\sexplicit\sroot\suser\sID\s\[LinuxOnly\]$'
test/e2e/common/security_context.go:132
Nov 22 10:31:20.518: Unexpected error:
    <*errors.errorString | 0xc0000a1960>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/common/security_context.go:140
				
stdout/stderr: junit_03.xml


Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly] 5m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\snot\srun\swith\san\sexplicit\sroot\suser\sID\s\[LinuxOnly\]$'
test/e2e/common/security_context.go:132
Nov 22 10:26:20.188: Unexpected error:
    <*errors.errorString | 0xc0000a1960>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/common/security_context.go:140
				
stdout/stderr: junit_03.xml


Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] 5m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\sallow\sprivilege\sescalation\swhen\snot\sexplicitly\sset\sand\suid\s\!\=\s0\s\[LinuxOnly\]\s\[NodeConformance\]$'
test/e2e/common/security_context.go:328
Nov 22 10:32:04.475: wait for pod "alpine-nnp-nil-386b7e2a-764c-4ce4-a94c-f73e28150199" to success
Expected success, but got an error:
    <*errors.errorString | 0xc001bce130>: {
        s: "Gave up after waiting 5m0s for pod \"alpine-nnp-nil-386b7e2a-764c-4ce4-a94c-f73e28150199\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "alpine-nnp-nil-386b7e2a-764c-4ce4-a94c-f73e28150199" to be "success or failure"
test/e2e/framework/pods.go:208
				
stdout/stderr: junit_06.xml


Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] 5m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\sallow\sprivilege\sescalation\swhen\snot\sexplicitly\sset\sand\suid\s\!\=\s0\s\[LinuxOnly\]\s\[NodeConformance\]$'
test/e2e/common/security_context.go:328
Nov 22 10:27:03.300: wait for pod "alpine-nnp-nil-4b66db63-2f77-478a-a5f2-12c7250d087c" to success
Expected success, but got an error:
    <*errors.errorString | 0xc000e35110>: {
        s: "Gave up after waiting 5m0s for pod \"alpine-nnp-nil-4b66db63-2f77-478a-a5f2-12c7250d087c\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "alpine-nnp-nil-4b66db63-2f77-478a-a5f2-12c7250d087c" to be "success or failure"
test/e2e/framework/pods.go:208
				
stdout/stderr: junit_06.xml


Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance] 5m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\sallow\sprivilege\sescalation\swhen\strue\s\[LinuxOnly\]\s\[NodeConformance\]$'
test/e2e/common/security_context.go:360
Nov 22 10:12:09.280: wait for pod "alpine-nnp-true-2514bff3-56d3-4800-9a2f-a5ffcf46c720" to success
Expected success, but got an error:
    <*errors.errorString | 0xc0021ece60>: {
        s: "Gave up after waiting 5m0s for pod \"alpine-nnp-true-2514bff3-56d3-4800-9a2f-a5ffcf46c720\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "alpine-nnp-true-2514bff3-56d3-4800-9a2f-a5ffcf46c720" to be "success or failure"
test/e2e/framework/pods.go:208
				
stdout/stderr: junit_13.xml


Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance] 5m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\sallow\sprivilege\sescalation\swhen\strue\s\[LinuxOnly\]\s\[NodeConformance\]$'
test/e2e/common/security_context.go:360
Nov 22 10:07:08.227: wait for pod "alpine-nnp-true-a9c7b278-6f0a-4f8f-8f24-97213b06a9d9" to success
Expected success, but got an error:
    <*errors.errorString | 0xc002276a40>: {
        s: "Gave up after waiting 5m0s for pod \"alpine-nnp-true-a9c7b278-6f0a-4f8f-8f24-97213b06a9d9\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "alpine-nnp-true-a9c7b278-6f0a-4f8f-8f24-97213b06a9d9" to be "success or failure"
test/e2e/framework/pods.go:208
				
stdout/stderr: junit_13.xml


Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] 5m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\snot\sallow\sprivilege\sescalation\swhen\sfalse\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:721
Nov 22 10:07:34.037: wait for pod "alpine-nnp-false-b29e9558-7eda-4881-bccb-514614e1fba6" to success
Expected success, but got an error:
    <*errors.errorString | 0xc000430120>: {
        s: "Gave up after waiting 5m0s for pod \"alpine-nnp-false-b29e9558-7eda-4881-bccb-514614e1fba6\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "alpine-nnp-false-b29e9558-7eda-4881-bccb-514614e1fba6" to be "success or failure"
test/e2e/framework/pods.go:208
				
stdout/stderr: junit_12.xml


Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] 5m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\snot\sallow\sprivilege\sescalation\swhen\sfalse\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:721
Nov 22 10:02:32.572: wait for pod "alpine-nnp-false-5afad539-5bd3-4006-b407-4ddd091446ae" to success
Expected success, but got an error:
    <*errors.errorString | 0xc00042ead0>: {
        s: "Gave up after waiting 5m0s for pod \"alpine-nnp-false-5afad539-5bd3-4006-b407-4ddd091446ae\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "alpine-nnp-false-5afad539-5bd3-4006-b407-4ddd091446ae" to be "success or failure"
test/e2e/framework/pods.go:208
				
stdout/stderr: junit_12.xml


Kubernetes e2e suite [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] 5m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAggregator\sShould\sbe\sable\sto\ssupport\sthe\s1\.10\sSample\sAPI\sServer\susing\sthe\scurrent\sAggregator\s\[Conformance\]$'
test/e2e/framework/framework.go:721
Nov 22 09:57:10.135: deploying extension apiserver in namespace aggregator-7365
Unexpected error:
    <*errors.errorString | 0xc0005acfa0>: {
        s: "error waiting for deployment \"sample-apiserver-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013128, loc:(*time.Location)(0x7ce0220)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013128, loc:(*time.Location)(0x7ce0220)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013128, loc:(*time.Location)(0x7ce0220)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013128, loc:(*time.Location)(0x7ce0220)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-apiserver-deployment-867766ffc6\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
    }
    error waiting for deployment "sample-apiserver-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013128, loc:(*time.Location)(0x7ce0220)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013128, loc:(*time.Location)(0x7ce0220)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013128, loc:(*time.Location)(0x7ce0220)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013128, loc:(*time.Location)(0x7ce0220)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
occurred
test/e2e/apimachinery/aggregator.go:327
				
stdout/stderr: junit_04.xml


Kubernetes e2e suite [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] 5m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAggregator\sShould\sbe\sable\sto\ssupport\sthe\s1\.10\sSample\sAPI\sServer\susing\sthe\scurrent\sAggregator\s\[Conformance\]$'
test/e2e/framework/framework.go:721
Nov 22 10:02:13.418: deploying extension apiserver in namespace aggregator-4449
Unexpected error:
    <*errors.errorString | 0xc0022b7270>: {
        s: "error waiting for deployment \"sample-apiserver-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013431, loc:(*time.Location)(0x7ce0220)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013431, loc:(*time.Location)(0x7ce0220)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013431, loc:(*time.Location)(0x7ce0220)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013431, loc:(*time.Location)(0x7ce0220)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-apiserver-deployment-867766ffc6\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
    }
    error waiting for deployment "sample-apiserver-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013431, loc:(*time.Location)(0x7ce0220)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013431, loc:(*time.Location)(0x7ce0220)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013431, loc:(*time.Location)(0x7ce0220)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63710013431, loc:(*time.Location)(0x7ce0220)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
occurred
test/e2e/apimachinery/aggregator.go:327
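The per-test output above is archived in the junit_NN.xml artifacts each entry points at. A minimal sketch of tallying failed vs. succeeded cases from such a file, as the "116 failed / 347 succeeded" summary does (standard JUnit XML assumed; the inline sample below is a hypothetical stand-in for a real artifact):

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in for the contents of an artifact like junit_22.xml.
SAMPLE = """<testsuite tests="3" failures="1">
  <testcase classname="Kubernetes e2e suite" name="ok test" time="1.0"/>
  <testcase classname="Kubernetes e2e suite" name="bad test" time="901.0">
    <failure>All 3 attempts failed: expected container state: Running, got: "Waiting"</failure>
  </testcase>
  <testcase classname="Kubernetes e2e suite" name="skipped test" time="0">
    <skipped/>
  </testcase>
</testsuite>"""

def summarize(xml_text: str):
    """Return (failed, succeeded) counts from a JUnit testsuite,
    counting <failure>/<error> cases as failed and ignoring skipped ones."""
    root = ET.fromstring(xml_text)
    failed = succeeded = 0
    for case in root.iter("testcase"):
        if case.find("failure") is not None or case.find("error") is not None:
            failed += 1
        elif case.find("skipped") is None:
            succeeded += 1
    return failed, succeeded

print(summarize(SAMPLE))  # -> (1, 1)
```

Running this over all junit_*.xml artifacts for the job would reproduce the headline failure count.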