PR       | berry2012: Update dynamic_provisioning README.md
Result   | ABORTED
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 37m31s
Revision | 2cf9c51d5afef33de5c32192cca513ee3bb7504a
Refs     | 290
... skipping 337 lines ...
#9 9.159   Installing : 7:device-mapper-libs-1.02.170-6.amzn2.5.x86_64     21/29
#9 9.414   Installing : cryptsetup-libs-1.7.4-4.amzn2.x86_64               22/29
#9 9.622   Installing : elfutils-libs-0.176-2.amzn2.x86_64                 23/29
#9 9.903   Installing : systemd-libs-219-78.amzn2.0.21.x86_64              24/29
#9 10.01   Installing : 1:dbus-libs-1.10.24-7.amzn2.0.2.x86_64             25/29
#9 11.87   Installing : systemd-219-78.amzn2.0.21.x86_64                   26/29
#9 12.65 Failed to get D-Bus connection: Operation not permitted
#9 12.71   Installing : elfutils-default-yama-scope-0.176-2.amzn2.noarch   27/29
#9 13.03   Installing : 1:dbus-1.10.24-7.amzn2.0.2.x86_64                  28/29
#9 13.15   Installing : libyaml-0.1.4-11.amzn2.0.2.x86_64                  29/29
#9 13.33   Verifying  : lz4-1.7.5-2.amzn2.0.1.x86_64                        1/29
#9 13.36   Verifying  : elfutils-default-yama-scope-0.176-2.amzn2.noarch    2/29
#9 13.40   Verifying  : libfdisk-2.30.2-2.amzn2.0.10.x86_64                 3/29
... skipping 667 lines ...
## Validating cluster test-cluster-32598.k8s.local
# Using cluster from kubectl context: test-cluster-32598.k8s.local

Validating cluster test-cluster-32598.k8s.local

W0111 16:12:00.576233    7810 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-32598-k8-mh1rj8-388857657.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-32598-k8-mh1rj8-388857657.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0111 16:12:10.616304    7810 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-32598-k8-mh1rj8-388857657.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-32598-k8-mh1rj8-388857657.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0111 16:12:20.658424    7810 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-32598-k8-mh1rj8-388857657.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-32598-k8-mh1rj8-388857657.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0111 16:12:30.700069    7810 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-32598-k8-mh1rj8-388857657.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-32598-k8-mh1rj8-388857657.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0111 16:12:44.213674    7810 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: Get "https://api-test-cluster-32598-k8-mh1rj8-388857657.us-west-2.elb.amazonaws.com/api/v1/nodes": dial tcp: lookup api-test-cluster-32598-k8-mh1rj8-388857657.us-west-2.elb.amazonaws.com on 10.63.240.10:53: no such host
W0111 16:13:05.912550    7810 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0111 16:13:27.502872    7810 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0111 16:13:49.210250    7810 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0111 16:14:10.874503    7810 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)
W0111 16:14:32.541870    7810 validate_cluster.go:173] (will retry): unexpected error during validation: error listing nodes: an error on the server ("") has prevented the request from succeeding (get nodes)

INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a    Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a     Node    c5.large     1    1    us-west-2a

NODE STATUS
... skipping 3 lines ...

VALIDATION ERRORS
KIND     NAME                                          MESSAGE
Machine  i-0ea5135b2e3b21d35                           machine "i-0ea5135b2e3b21d35" has not yet joined cluster
Node     ip-172-20-35-125.us-west-2.compute.internal   node "ip-172-20-35-125.us-west-2.compute.internal" of role "master" is not ready
Pod      kube-system/kops-controller-s7qtj             system-node-critical pod "kops-controller-s7qtj" is pending

Validation Failed
W0111 16:14:45.553651    7810 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a    Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a     Node    c5.large     1    1    us-west-2a
... skipping 8 lines ...
Node     ip-172-20-35-125.us-west-2.compute.internal       master "ip-172-20-35-125.us-west-2.compute.internal" is missing kube-controller-manager pod
Node     ip-172-20-35-125.us-west-2.compute.internal       master "ip-172-20-35-125.us-west-2.compute.internal" is missing kube-scheduler pod
Pod      kube-system/coredns-8f5559c9b-nc25j               system-cluster-critical pod "coredns-8f5559c9b-nc25j" is pending
Pod      kube-system/coredns-autoscaler-6f594f4c58-tgnhl   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-tgnhl" is pending
Pod      kube-system/dns-controller-5d59c585d8-qdm2q       system-cluster-critical pod "dns-controller-5d59c585d8-qdm2q" is pending

Validation Failed
W0111 16:14:57.034695    7810 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a    Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a     Node    c5.large     1    1    us-west-2a
... skipping 7 lines ...
Node     ip-172-20-35-125.us-west-2.compute.internal       master "ip-172-20-35-125.us-west-2.compute.internal" is missing kube-apiserver pod
Node     ip-172-20-35-125.us-west-2.compute.internal       master "ip-172-20-35-125.us-west-2.compute.internal" is missing kube-controller-manager pod
Node     ip-172-20-35-125.us-west-2.compute.internal       master "ip-172-20-35-125.us-west-2.compute.internal" is missing kube-scheduler pod
Pod      kube-system/coredns-8f5559c9b-nc25j               system-cluster-critical pod "coredns-8f5559c9b-nc25j" is pending
Pod      kube-system/coredns-autoscaler-6f594f4c58-tgnhl   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-tgnhl" is pending

Validation Failed
W0111 16:15:08.407554    7810 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a    Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a     Node    c5.large     1    1    us-west-2a
... skipping 7 lines ...
Node     ip-172-20-35-125.us-west-2.compute.internal       master "ip-172-20-35-125.us-west-2.compute.internal" is missing kube-apiserver pod
Node     ip-172-20-35-125.us-west-2.compute.internal       master "ip-172-20-35-125.us-west-2.compute.internal" is missing kube-controller-manager pod
Node     ip-172-20-35-125.us-west-2.compute.internal       master "ip-172-20-35-125.us-west-2.compute.internal" is missing kube-scheduler pod
Pod      kube-system/coredns-8f5559c9b-nc25j               system-cluster-critical pod "coredns-8f5559c9b-nc25j" is pending
Pod      kube-system/coredns-autoscaler-6f594f4c58-tgnhl   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-tgnhl" is pending

Validation Failed
W0111 16:15:19.829018    7810 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a    Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a     Node    c5.large     1    1    us-west-2a
... skipping 9 lines ...
Pod      kube-system/coredns-8f5559c9b-nc25j                                            system-cluster-critical pod "coredns-8f5559c9b-nc25j" is pending
Pod      kube-system/coredns-autoscaler-6f594f4c58-tgnhl                                system-cluster-critical pod "coredns-autoscaler-6f594f4c58-tgnhl" is pending
Pod      kube-system/etcd-manager-events-ip-172-20-35-125.us-west-2.compute.internal   system-cluster-critical pod "etcd-manager-events-ip-172-20-35-125.us-west-2.compute.internal" is pending
Pod      kube-system/etcd-manager-main-ip-172-20-35-125.us-west-2.compute.internal     system-cluster-critical pod "etcd-manager-main-ip-172-20-35-125.us-west-2.compute.internal" is pending
Pod      kube-system/kube-proxy-ip-172-20-35-125.us-west-2.compute.internal            system-node-critical pod "kube-proxy-ip-172-20-35-125.us-west-2.compute.internal" is pending

Validation Failed
W0111 16:15:31.277690    7810 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a    Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a     Node    c5.large     1    1    us-west-2a
... skipping 6 lines ...
Machine  i-0ea5135b2e3b21d35                                                               machine "i-0ea5135b2e3b21d35" has not yet joined cluster
Node     ip-172-20-35-125.us-west-2.compute.internal                                       master "ip-172-20-35-125.us-west-2.compute.internal" is missing kube-controller-manager pod
Pod      kube-system/coredns-8f5559c9b-nc25j                                               system-cluster-critical pod "coredns-8f5559c9b-nc25j" is pending
Pod      kube-system/coredns-autoscaler-6f594f4c58-tgnhl                                   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-tgnhl" is pending
Pod      kube-system/kube-controller-manager-ip-172-20-35-125.us-west-2.compute.internal   system-cluster-critical pod "kube-controller-manager-ip-172-20-35-125.us-west-2.compute.internal" is pending

Validation Failed
W0111 16:15:42.605045    7810 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a    Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a     Node    c5.large     1    1    us-west-2a
... skipping 5 lines ...

VALIDATION ERRORS
KIND  NAME                                              MESSAGE
Node  ip-172-20-57-107.us-west-2.compute.internal       node "ip-172-20-57-107.us-west-2.compute.internal" of role "node" is not ready
Pod   kube-system/coredns-8f5559c9b-nc25j               system-cluster-critical pod "coredns-8f5559c9b-nc25j" is pending
Pod   kube-system/coredns-autoscaler-6f594f4c58-tgnhl   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-tgnhl" is pending

Validation Failed
W0111 16:15:54.101889    7810 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a    Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a     Node    c5.large     1    1    us-west-2a
... skipping 4 lines ...

VALIDATION ERRORS
KIND  NAME                                              MESSAGE
Pod   kube-system/coredns-8f5559c9b-nc25j               system-cluster-critical pod "coredns-8f5559c9b-nc25j" is pending
Pod   kube-system/coredns-autoscaler-6f594f4c58-tgnhl   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-tgnhl" is pending

Validation Failed
W0111 16:16:05.491224    7810 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a    Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a     Node    c5.large     1    1    us-west-2a
... skipping 4 lines ...

VALIDATION ERRORS
KIND  NAME                                              MESSAGE
Pod   kube-system/coredns-8f5559c9b-nc25j               system-cluster-critical pod "coredns-8f5559c9b-nc25j" is pending
Pod   kube-system/coredns-autoscaler-6f594f4c58-tgnhl   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-tgnhl" is pending

Validation Failed
W0111 16:16:16.885455    7810 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a    Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a     Node    c5.large     1    1    us-west-2a
... skipping 4 lines ...

VALIDATION ERRORS
KIND  NAME                                              MESSAGE
Pod   kube-system/coredns-8f5559c9b-nc25j               system-cluster-critical pod "coredns-8f5559c9b-nc25j" is pending
Pod   kube-system/coredns-autoscaler-6f594f4c58-tgnhl   system-cluster-critical pod "coredns-autoscaler-6f594f4c58-tgnhl" is pending

Validation Failed
W0111 16:16:28.235229    7810 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a    Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a     Node    c5.large     1    1    us-west-2a
... skipping 3 lines ...
ip-172-20-57-107.us-west-2.compute.internal   node   True

VALIDATION ERRORS
KIND  NAME                                  MESSAGE
Pod   kube-system/coredns-8f5559c9b-r4dp4   system-cluster-critical pod "coredns-8f5559c9b-r4dp4" is pending

Validation Failed
W0111 16:16:39.639316    7810 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a    Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a     Node    c5.large     1    1    us-west-2a
... skipping 3 lines ...
ip-172-20-57-107.us-west-2.compute.internal   node   True

VALIDATION ERRORS
KIND  NAME                                                                  MESSAGE
Pod   kube-system/kube-proxy-ip-172-20-57-107.us-west-2.compute.internal   system-node-critical pod "kube-proxy-ip-172-20-57-107.us-west-2.compute.internal" is pending

Validation Failed
W0111 16:16:50.983765    7810 validate_cluster.go:221] (will retry): cluster not yet healthy

INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a    Master  t3.medium    1    1    us-west-2a
nodes-us-west-2a     Node    c5.large     1    1    us-west-2a
... skipping 169 lines ...
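[Editor's note] The loop above is kops' cluster validator retrying until the control plane settles: first the API ELB's DNS name resolves, then the master and node register, then the system pods leave Pending. A minimal client-go sketch of the same poll-until-healthy pattern (the helper name and intervals are illustrative, not kops' actual code in validate_cluster.go):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodesReady polls the API server until every node reports Ready,
// treating list failures (e.g. DNS for the API ELB not yet propagated)
// as retryable, the way the validator log above does.
func waitForNodesReady(cs kubernetes.Interface, timeout time.Duration) error {
	return wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			fmt.Printf("(will retry): error listing nodes: %v\n", err)
			return false, nil
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" && c.Status != "True" {
					fmt.Printf("node %q is not ready\n", n.Name)
					return false, nil
				}
			}
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitForNodesReady(kubernetes.NewForConfigOrDie(cfg), 15*time.Minute); err != nil {
		panic(err)
	}
}
```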
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:185
    [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath file is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:256

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping
      /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 71 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:185
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:258

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping
      /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 92 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:185
    [Testpattern: Inline-volume (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

      Driver fsx.csi.aws.com doesn't support InlineVolume -- skipping
      /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 10 lines ...
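[Editor's note] Each block above ends in the framework's capability gate: the FSx driver only supports pre-provisioned volumes, so every Dynamic PV or inline-volume test pattern is skipped in BeforeEach before the test body runs. A rough sketch of that gating (the type, map, and function names here are invented for illustration; the vendored framework's real API differs):

```go
// Package e2esketch illustrates the capability check that produces the
// "Driver ... doesn't support DynamicPV -- skipping" lines above.
package e2esketch

import "github.com/onsi/ginkgo/v2"

// VolType mirrors the test-pattern volume kinds seen in the log.
type VolType string

const (
	DynamicPV        VolType = "DynamicPV"
	InlineVolume     VolType = "InlineVolume"
	PreprovisionedPV VolType = "PreprovisionedPV"
)

// supportedVolTypes is what a static-provisioning-only driver such as
// fsx.csi.aws.com would advertise (assumed here for illustration).
var supportedVolTypes = map[VolType]bool{PreprovisionedPV: true}

// skipUnlessSupported, called from a suite's BeforeEach, emits the same
// skip message for every pattern the driver does not advertise.
func skipUnlessSupported(driver string, v VolType) {
	if !supportedVolTypes[v] {
		ginkgo.Skip("Driver " + driver + " doesn't support " + string(v) + " -- skipping")
	}
}
```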
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:185
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:278

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping
      /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 352 lines ...
Jan 11 16:24:13.850: INFO: PersistentVolumeClaim pvc-frfk4 found but phase is Pending instead of Bound.
Jan 11 16:24:15.917: INFO: PersistentVolumeClaim pvc-frfk4 found and phase=Bound (5m33.162728734s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Jan 11 16:24:16.120: INFO: Waiting up to 15m0s for pod "fsx-volume-tester-7bv5q" in namespace "fsx-2170" to be "Succeeded or Failed"
Jan 11 16:24:16.187: INFO: Pod "fsx-volume-tester-7bv5q": Phase="Pending", Reason="", readiness=false. Elapsed: 67.163935ms
Jan 11 16:24:18.254: INFO: Pod "fsx-volume-tester-7bv5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134629616s
Jan 11 16:24:20.322: INFO: Pod "fsx-volume-tester-7bv5q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.202009232s
STEP: Saw pod success
Jan 11 16:24:20.322: INFO: Pod "fsx-volume-tester-7bv5q" satisfied condition "Succeeded or Failed"
Jan 11 16:24:20.322: INFO: deleting Pod "fsx-2170"/"fsx-volume-tester-7bv5q"
Jan 11 16:24:20.404: INFO: Pod fsx-volume-tester-7bv5q has the following logs: hello world
STEP: Deleting pod fsx-volume-tester-7bv5q in namespace fsx-2170
Jan 11 16:24:20.479: INFO: deleting PVC "fsx-2170"/"pvc-frfk4"
Jan 11 16:24:20.479: INFO: Deleting PersistentVolumeClaim "pvc-frfk4"
... skipping 84 lines ...
Jan 11 16:24:44.544: INFO: PersistentVolumeClaim pvc-nqml2 found but phase is Pending instead of Bound.
Jan 11 16:24:46.608: INFO: PersistentVolumeClaim pvc-nqml2 found and phase=Bound (2.128136984s)
Jan 11 16:24:46.608: INFO: Waiting up to 3m0s for PersistentVolume fsx.csi.aws.com-jdzjg to have phase Bound
Jan 11 16:24:46.672: INFO: PersistentVolume fsx.csi.aws.com-jdzjg found and phase=Bound (64.112141ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dj5s
STEP: Creating a pod to test multi_subpath
Jan 11 16:24:46.870: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dj5s" in namespace "provisioning-3032" to be "Succeeded or Failed"
Jan 11 16:24:46.990: INFO: Pod "pod-subpath-test-preprovisionedpv-dj5s": Phase="Pending", Reason="", readiness=false. Elapsed: 120.12511ms
Jan 11 16:24:49.055: INFO: Pod "pod-subpath-test-preprovisionedpv-dj5s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185155303s
Jan 11 16:24:51.125: INFO: Pod "pod-subpath-test-preprovisionedpv-dj5s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254509622s
Jan 11 16:24:53.190: INFO: Pod "pod-subpath-test-preprovisionedpv-dj5s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.319762893s
STEP: Saw pod success
Jan 11 16:24:53.190: INFO: Pod "pod-subpath-test-preprovisionedpv-dj5s" satisfied condition "Succeeded or Failed"
Jan 11 16:24:53.254: INFO: Trying to get logs from node ip-172-20-57-107.us-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-dj5s container test-container-subpath-preprovisionedpv-dj5s: <nil>
STEP: delete the pod
Jan 11 16:24:53.404: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dj5s to disappear
Jan 11 16:24:53.467: INFO: Pod pod-subpath-test-preprovisionedpv-dj5s no longer exists
STEP: Deleting pod
Jan 11 16:24:53.467: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dj5s" in namespace "provisioning-3032"
... skipping 123 lines ...
Jan 11 16:25:29.980: INFO: PersistentVolumeClaim pvc-9sksj found but phase is Pending instead of Bound.
Jan 11 16:25:32.043: INFO: PersistentVolumeClaim pvc-9sksj found and phase=Bound (16.583249068s)
Jan 11 16:25:32.043: INFO: Waiting up to 3m0s for PersistentVolume fsx.csi.aws.com-f7rcw to have phase Bound
Jan 11 16:25:32.106: INFO: PersistentVolume fsx.csi.aws.com-f7rcw found and phase=Bound (62.540999ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-rrl7
STEP: Creating a pod to test exec-volume-test
Jan 11 16:25:32.301: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-rrl7" in namespace "volume-756" to be "Succeeded or Failed"
Jan 11 16:25:32.364: INFO: Pod "exec-volume-test-preprovisionedpv-rrl7": Phase="Pending", Reason="", readiness=false. Elapsed: 63.596402ms
Jan 11 16:25:34.428: INFO: Pod "exec-volume-test-preprovisionedpv-rrl7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127291895s
Jan 11 16:25:36.523: INFO: Pod "exec-volume-test-preprovisionedpv-rrl7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.222121795s
STEP: Saw pod success
Jan 11 16:25:36.523: INFO: Pod "exec-volume-test-preprovisionedpv-rrl7" satisfied condition "Succeeded or Failed"
Jan 11 16:25:36.585: INFO: Trying to get logs from node ip-172-20-57-107.us-west-2.compute.internal pod exec-volume-test-preprovisionedpv-rrl7 container exec-container-preprovisionedpv-rrl7: <nil>
STEP: delete the pod
Jan 11 16:25:36.722: INFO: Waiting for pod exec-volume-test-preprovisionedpv-rrl7 to disappear
Jan 11 16:25:36.784: INFO: Pod exec-volume-test-preprovisionedpv-rrl7 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-rrl7
Jan 11 16:25:36.784: INFO: Deleting pod "exec-volume-test-preprovisionedpv-rrl7" in namespace "volume-756"
... skipping 524 lines ...
Jan 11 16:29:58.688: INFO: PersistentVolumeClaim pvc-lxtkr found but phase is Pending instead of Bound.
Jan 11 16:30:00.756: INFO: PersistentVolumeClaim pvc-lxtkr found and phase=Bound (2.134741352s)
Jan 11 16:30:00.756: INFO: Waiting up to 3m0s for PersistentVolume fsx.csi.aws.com-8nh8d to have phase Bound
Jan 11 16:30:00.822: INFO: PersistentVolume fsx.csi.aws.com-8nh8d found and phase=Bound (65.509417ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rchb
STEP: Creating a pod to test atomic-volume-subpath
Jan 11 16:30:01.022: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rchb" in namespace "provisioning-189" to be "Succeeded or Failed"
Jan 11 16:30:01.091: INFO: Pod "pod-subpath-test-preprovisionedpv-rchb": Phase="Pending", Reason="", readiness=false. Elapsed: 69.554189ms
Jan 11 16:30:03.164: INFO: Pod "pod-subpath-test-preprovisionedpv-rchb": Phase="Running", Reason="", readiness=true. Elapsed: 2.142242149s
Jan 11 16:30:05.231: INFO: Pod "pod-subpath-test-preprovisionedpv-rchb": Phase="Running", Reason="", readiness=true. Elapsed: 4.208920034s
Jan 11 16:30:07.298: INFO: Pod "pod-subpath-test-preprovisionedpv-rchb": Phase="Running", Reason="", readiness=true. Elapsed: 6.276346582s
Jan 11 16:30:09.365: INFO: Pod "pod-subpath-test-preprovisionedpv-rchb": Phase="Running", Reason="", readiness=true. Elapsed: 8.342756858s
Jan 11 16:30:11.430: INFO: Pod "pod-subpath-test-preprovisionedpv-rchb": Phase="Running", Reason="", readiness=true. Elapsed: 10.40854209s
Jan 11 16:30:13.497: INFO: Pod "pod-subpath-test-preprovisionedpv-rchb": Phase="Running", Reason="", readiness=true. Elapsed: 12.474930657s
Jan 11 16:30:15.564: INFO: Pod "pod-subpath-test-preprovisionedpv-rchb": Phase="Running", Reason="", readiness=true. Elapsed: 14.542235932s
Jan 11 16:30:17.632: INFO: Pod "pod-subpath-test-preprovisionedpv-rchb": Phase="Running", Reason="", readiness=true. Elapsed: 16.610193868s
Jan 11 16:30:19.699: INFO: Pod "pod-subpath-test-preprovisionedpv-rchb": Phase="Running", Reason="", readiness=true. Elapsed: 18.676971797s
Jan 11 16:30:21.765: INFO: Pod "pod-subpath-test-preprovisionedpv-rchb": Phase="Running", Reason="", readiness=true. Elapsed: 20.743533572s
Jan 11 16:30:23.848: INFO: Pod "pod-subpath-test-preprovisionedpv-rchb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.826249502s
STEP: Saw pod success
Jan 11 16:30:23.848: INFO: Pod "pod-subpath-test-preprovisionedpv-rchb" satisfied condition "Succeeded or Failed"
Jan 11 16:30:23.914: INFO: Trying to get logs from node ip-172-20-57-107.us-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-rchb container test-container-subpath-preprovisionedpv-rchb: <nil>
STEP: delete the pod
Jan 11 16:30:24.059: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rchb to disappear
Jan 11 16:30:24.125: INFO: Pod pod-subpath-test-preprovisionedpv-rchb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rchb
Jan 11 16:30:24.125: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rchb" in namespace "provisioning-189"
... skipping 51 lines ...
FSx CSI Driver Conformance
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:183
  [Driver: fsx.csi.aws.com]
  /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/conformance_test.go:185
    [Testpattern: Dynamic PV (default fs)] subPath
    /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail if non-existent subpath is outside the volume [Slow][LinuxOnly] [BeforeEach]
      /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:267

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping
      /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 130 lines ...
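[Editor's note] The passing tests above all follow the same rhythm: poll the PVC until it leaves Pending, then poll the test pod until it is "Succeeded or Failed". A sketch of the claim-binding half of that loop (the helper name and poll interval are illustrative, not the framework's actual helper):

```go
// Package e2esketch sketches the claim-binding poll behind the
// "PersistentVolumeClaim ... found but phase is Pending instead of Bound."
// progress lines in the log.
package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls a claim until it reaches phase Bound, printing the
// same style of progress line the e2e framework logs.
func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // the claim was just created; any API error is fatal here
		}
		if pvc.Status.Phase != corev1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
			return false, nil
		}
		return true, nil
	})
}
```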
/home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver fsx.csi.aws.com doesn't support DynamicPV -- skipping
      /home/prow/go/src/github.com/kubernetes-sigs/aws-fsx-csi-driver/tests/e2e/vendor/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:168","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2023-01-11T16:30:49Z"}
++ early_exit_handler
++ '[' -n 166 ']'
++ kill -TERM 166
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 5 lines ...
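[Editor's note] The tail is the job deadline firing, which explains the ABORTED result: Prow's Go entrypoint logs the interrupt and forwards SIGTERM to the wrapped test process (PID 166 here), whose bash traps then run early_exit_handler and cleanup_dind. A stripped-down sketch of that signal forwarding (not Prow's actual implementation; the wrapped command is hypothetical):

```go
package main

import (
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	// Hypothetical wrapped test process; the real entrypoint launches the
	// job's configured test command.
	cmd := exec.Command("./runner.sh")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, os.Interrupt)
	go func() {
		<-sigs
		// Forward the signal so the child's trap handlers (early_exit_handler,
		// cleanup_dind) get a chance to run before the pod is killed.
		_ = cmd.Process.Signal(syscall.SIGTERM)
	}()

	_ = cmd.Wait()
}
```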