go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sServers\swith\ssupport\sfor\sAPI\schunking\sshould\ssupport\scontinue\slisting\sfrom\sthe\slast\skey\sif\sthe\soriginal\sversion\shas\sbeen\scompacted\saway\,\sthough\sthe\slist\sis\sinconsistent\s\[Slow\]$'
test/e2e/apimachinery/chunking.go:177
k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3()
	test/e2e/apimachinery/chunking.go:177 +0x7fc
There were additional failures detected after the initial failure:
[FAILED] Nov 26 06:21:16.219: failed to list events in namespace "chunking-989": Get "https://34.83.17.181/api/v1/namespaces/chunking-989/events": dial tcp 34.83.17.181:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 06:21:16.259: Couldn't delete ns: "chunking-989": Delete "https://34.83.17.181/api/v1/namespaces/chunking-989": dial tcp 34.83.17.181:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.17.181/api/v1/namespaces/chunking-989", Err:(*net.OpError)(0xc002d6a0a0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-api-machinery] Servers with support for API chunking set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:14:15.094 Nov 26 06:14:15.095: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename chunking 11/26/22 06:14:15.096 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 06:14:15.495 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 06:14:15.699 [BeforeEach] [sig-api-machinery] Servers with support for API chunking test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-api-machinery] Servers with support for API chunking test/e2e/apimachinery/chunking.go:51 STEP: creating a large number of resources 11/26/22 06:14:15.864 [It] should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow] test/e2e/apimachinery/chunking.go:126 STEP: retrieving the first page 11/26/22 06:14:33.454 Nov 26 06:14:33.526: INFO: Retrieved 40/40 results with rv 6394 and continue eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 STEP: retrieving the second page until the token expires 11/26/22 06:14:33.526 Nov 26 06:14:53.585: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 26 06:15:13.589: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 26 06:15:33.608: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 26 06:15:53.608: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 26 06:16:13.616: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 26 06:16:33.600: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 26 06:16:53.581: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 26 06:17:13.585: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 26 06:17:33.587: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 26 06:17:53.611: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 26 06:18:13.596: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 26 06:18:33.576: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 26 06:18:53.575: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet Nov 26 06:19:24.327: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow] (Spec Runtime: 5m18.359s) test/e2e/apimachinery/chunking.go:126 In [It] (Node 
Runtime: 5m0s) test/e2e/apimachinery/chunking.go:126 At [By Step] retrieving the second page until the token expires (Step Runtime: 4m59.928s) test/e2e/apimachinery/chunking.go:149 Spec Goroutine goroutine 1856 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00482f020, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xa0?, 0x2fd9d05?, 0x38?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0045c54f0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x76e86f3?, 0x32?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3() test/e2e/apimachinery/chunking.go:152 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x0, 0x0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 06:19:33.573: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow] (Spec Runtime: 5m38.361s) test/e2e/apimachinery/chunking.go:126 In [It] (Node Runtime: 5m20.002s) test/e2e/apimachinery/chunking.go:126 At [By Step] retrieving the second page until the token expires (Step Runtime: 5m19.929s) test/e2e/apimachinery/chunking.go:149 Spec Goroutine goroutine 1856 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00482f020, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xa0?, 0x2fd9d05?, 0x38?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0045c54f0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x76e86f3?, 0x32?, 0x0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3() test/e2e/apimachinery/chunking.go:152 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x0, 0x0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 06:19:53.573: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow] (Spec Runtime: 5m58.363s) test/e2e/apimachinery/chunking.go:126 In [It] (Node Runtime: 5m40.004s) test/e2e/apimachinery/chunking.go:126 At [By Step] retrieving the second page until the token expires (Step Runtime: 5m39.932s) test/e2e/apimachinery/chunking.go:149 Spec Goroutine goroutine 1856 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00482f020, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xa0?, 0x2fd9d05?, 0x38?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0045c54f0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x76e86f3?, 0x32?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3() test/e2e/apimachinery/chunking.go:152 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x0, 0x0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 06:20:13.586: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow] (Spec Runtime: 6m18.365s) test/e2e/apimachinery/chunking.go:126 In [It] (Node Runtime: 6m0.006s) test/e2e/apimachinery/chunking.go:126 At [By Step] retrieving the second page until the token expires (Step Runtime: 5m59.934s) test/e2e/apimachinery/chunking.go:149 Spec Goroutine goroutine 1856 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00482f020, 0x2fdb16a?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xa0?, 0x2fd9d05?, 0x38?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0045c54f0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x76e86f3?, 0x32?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3() test/e2e/apimachinery/chunking.go:152 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x0, 0x0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 06:20:33.630: INFO: Token eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6NjM5NCwic3RhcnQiOiJ0ZW1wbGF0ZS0wMDM5XHUwMDAwIn0 has not expired yet ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow] (Spec Runtime: 6m38.367s) test/e2e/apimachinery/chunking.go:126 In [It] (Node Runtime: 6m20.008s) test/e2e/apimachinery/chunking.go:126 At [By Step] retrieving the second page until the token expires (Step Runtime: 6m19.936s) test/e2e/apimachinery/chunking.go:149 Spec Goroutine goroutine 1856 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00482f020, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xa0?, 0x2fd9d05?, 0x38?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0045c54f0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x76e86f3?, 0x32?, 0x0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3() test/e2e/apimachinery/chunking.go:152 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x0, 0x0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ ------------------------------ Progress Report for Ginkgo Process #13 Automatically polling progress: [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow] (Spec Runtime: 6m58.369s) test/e2e/apimachinery/chunking.go:126 In [It] (Node Runtime: 6m40.01s) test/e2e/apimachinery/chunking.go:126 At [By Step] retrieving the second page until the token expires (Step Runtime: 6m39.937s) test/e2e/apimachinery/chunking.go:149 Spec Goroutine goroutine 1856 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc0010be000, 0xc00334e000) vendor/golang.org/x/net/http2/transport.go:1200 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc0010f9b80, 0xc00334e000, {0xe0?}) vendor/golang.org/x/net/http2/transport.go:519 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...) vendor/golang.org/x/net/http2/transport.go:480 k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc002f54000?}, 0xc00334e000?) vendor/golang.org/x/net/http2/transport.go:3020 net/http.(*Transport).roundTrip(0xc002f54000, 0xc00334e000) /usr/local/go/src/net/http/transport.go:540 net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc00493eb10?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0032541e0, 0xc00022bc00) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc001780a20, 0xc00022a900) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc00022a900, {0x7fad100, 0xc001780a20}, {0x74d54e0?, 0x1?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc003254210, 0xc00022a900, {0x0?, 0x100000100?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc003254210, 0xc00022a900) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc00019b300, {0x7fe0bc8, 0xc0000820e0}, 0x0?) 
vendor/k8s.io/client-go/rest/request.go:964 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc00019b300, {0x7fe0bc8, 0xc0000820e0}) vendor/k8s.io/client-go/rest/request.go:1005 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*podTemplates).List(0xc0047f2d80, {0x7fe0bc8, 0xc0000820e0}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, 0x0}, {0x0, ...}, ...}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/podtemplate.go:95 > k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3.1() test/e2e/apimachinery/chunking.go:153 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0x262a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc00482f020, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0xa0?, 0x2fd9d05?, 0x38?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe0bc8, 0xc0000820c8}, 0x0?, 0xc0045c54f0?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x76e86f3?, 0x32?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 > k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3() test/e2e/apimachinery/chunking.go:152 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x0, 0x0}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ STEP: retrieving the second page again with the token received with the error message 11/26/22 06:21:16.1 Nov 26 06:21:16.139: INFO: Unexpected error: failed to list pod templates in namespace: chunking-989, given inconsistent continue token and limit: 40: <*url.Error | 0xc00493eed0>: { Op: "Get", URL: "https://34.83.17.181/api/v1/namespaces/chunking-989/podtemplates?limit=40", Err: <*net.OpError | 0xc002e65b80>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002a26c60>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 17, 181], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00122c340>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 06:21:16.139: FAIL: failed to list pod templates in namespace: chunking-989, given inconsistent continue token and limit: 40: Get "https://34.83.17.181/api/v1/namespaces/chunking-989/podtemplates?limit=40": dial tcp 34.83.17.181:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.3() test/e2e/apimachinery/chunking.go:177 +0x7fc [AfterEach] [sig-api-machinery] Servers with support for API chunking test/e2e/framework/node/init/init.go:32 Nov 26 06:21:16.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] 
Servers with support for API chunking dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 06:21:16.18 STEP: Collecting events from namespace "chunking-989". 11/26/22 06:21:16.18 Nov 26 06:21:16.219: INFO: Unexpected error: failed to list events in namespace "chunking-989": <*url.Error | 0xc002a26c90>: { Op: "Get", URL: "https://34.83.17.181/api/v1/namespaces/chunking-989/events", Err: <*net.OpError | 0xc004812d70>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00493f860>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 17, 181], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0047f3e60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 06:21:16.219: FAIL: failed to list events in namespace "chunking-989": Get "https://34.83.17.181/api/v1/namespaces/chunking-989/events": dial tcp 34.83.17.181:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0045c45c0, {0xc00327ac70, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc00321a1a0}, {0xc00327ac70, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0045c4650?, {0xc00327ac70?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00121b770) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc001965210?, 0xc0031a5fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc00260bc28?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001965210?, 0x29449fc?}, {0xae73300?, 0xc0031a5f80?, 0x2a6d786?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-api-machinery] Servers with support for API chunking tear down framework | framework.go:193 STEP: Destroying namespace "chunking-989" for this suite. 11/26/22 06:21:16.22 Nov 26 06:21:16.259: FAIL: Couldn't delete ns: "chunking-989": Delete "https://34.83.17.181/api/v1/namespaces/chunking-989": dial tcp 34.83.17.181:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.17.181/api/v1/namespaces/chunking-989", Err:(*net.OpError)(0xc002d6a0a0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00121b770) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc001965150?, 0x0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc001965150?, 0x0?}, {0xae73300?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
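What this spec exercises: it lets a continue token age until etcd compaction invalidates it, then expects the apiserver's 410 ("Expired") response to carry a fresh token that resumes the list, inconsistently, from the last key. A minimal client-go sketch of that client-side flow follows; the namespace, the page size of 40, and the stale-token plumbing are illustrative stand-ins, not the test's actual code.

```go
// Sketch: page through PodTemplates with a limit; when a saved continue
// token has been compacted away (HTTP 410 Gone), recover the "inconsistent
// continue" token from the error's Status and resume from the last key.
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func listWithRecovery(ctx context.Context, client kubernetes.Interface, ns, staleToken string) error {
	_, err := client.CoreV1().PodTemplates(ns).List(ctx, metav1.ListOptions{Limit: 40, Continue: staleToken})
	if err == nil {
		return nil // token was still valid; nothing to recover
	}
	if !apierrors.IsResourceExpired(err) {
		return err // some other failure, e.g. the connection refused seen above
	}
	// The expired-token error is an APIStatus whose ListMeta carries a new
	// continue token that restarts from the last observed key, trading away
	// the consistent snapshot the original token represented.
	status, ok := err.(apierrors.APIStatus)
	if !ok {
		return err
	}
	token := status.Status().ListMeta.Continue
	for {
		page, err := client.CoreV1().PodTemplates(ns).List(ctx, metav1.ListOptions{Limit: 40, Continue: token})
		if err != nil {
			return err
		}
		fmt.Printf("retrieved %d items\n", len(page.Items))
		if page.Continue == "" {
			return nil // final page
		}
		token = page.Continue
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// "chunking-989" and the token value are placeholders for illustration.
	_ = listWithRecovery(context.Background(), client, "chunking-989", "stale-token")
}
```

In this run the List at chunking.go:177 never got that far: the dial to 34.83.17.181:443 was refused, so the failure looks like apiserver unavailability rather than a chunking regression.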
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\sschedule\snew\sjobs\swhen\sForbidConcurrent\s\[Slow\]\s\[Conformance\]$'
test/e2e/apps/cronjob.go:133
k8s.io/kubernetes/test/e2e/apps.glob..func2.3()
	test/e2e/apps/cronjob.go:133 +0x290
There were additional failures detected after the initial failure:
[FAILED] Nov 26 06:26:50.883: failed to list events in namespace "cronjob-44": Get "https://34.83.17.181/api/v1/namespaces/cronjob-44/events": dial tcp 34.83.17.181:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 06:26:50.922: Couldn't delete ns: "cronjob-44": Delete "https://34.83.17.181/api/v1/namespaces/cronjob-44": dial tcp 34.83.17.181:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.17.181/api/v1/namespaces/cronjob-44", Err:(*net.OpError)(0xc00433d270)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-apps] CronJob set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:25:30.057 Nov 26 06:25:30.057: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename cronjob 11/26/22 06:25:30.059 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 06:25:30.432 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 06:25:30.54 [BeforeEach] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:31 [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] test/e2e/apps/cronjob.go:124 STEP: Creating a ForbidConcurrent cronjob 11/26/22 06:25:30.662 STEP: Ensuring a job is scheduled 11/26/22 06:25:30.763 ERROR: get pod list in provisioning-9532-7706: Get "https://34.83.17.181/api/v1/namespaces/provisioning-9532-7706/pods": dial tcp 34.83.17.181:443: connect: connection refused Nov 26
06:26:50.803: INFO: Unexpected error: Failed to schedule CronJob forbid: <*url.Error | 0xc003e7ad20>: { Op: "Get", URL: "https://34.83.17.181/apis/batch/v1/namespaces/cronjob-44/cronjobs/forbid", Err: <*net.OpError | 0xc0041cd950>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003e7acf0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 17, 181], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc004000440>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 06:26:50.803: FAIL: Failed to schedule CronJob forbid: Get "https://34.83.17.181/apis/batch/v1/namespaces/cronjob-44/cronjobs/forbid": dial tcp 34.83.17.181:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func2.3() test/e2e/apps/cronjob.go:133 +0x290 [AfterEach] [sig-apps] CronJob test/e2e/framework/node/init/init.go:32 Nov 26 06:26:50.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready ERROR: get pod list in provisioning-9532-7706: Get "https://34.83.17.181/api/v1/namespaces/provisioning-9532-7706/pods": dial tcp 34.83.17.181:443: connect: connection refused [DeferCleanup (Each)] [sig-apps] CronJob test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] CronJob dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 06:26:50.843 STEP: Collecting events from namespace "cronjob-44". 11/26/22 06:26:50.843 Nov 26 06:26:50.882: INFO: Unexpected error: failed to list events in namespace "cronjob-44": <*url.Error | 0xc003f8f050>: { Op: "Get", URL: "https://34.83.17.181/api/v1/namespaces/cronjob-44/events", Err: <*net.OpError | 0xc00433cf50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003f8f020>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 17, 181], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003eebaa0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } ERROR: get pod list in provisioning-9532-7706: Get "https://34.83.17.181/api/v1/namespaces/provisioning-9532-7706/pods": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:26:50.883: FAIL: failed to list events in namespace "cronjob-44": Get "https://34.83.17.181/api/v1/namespaces/cronjob-44/events": dial tcp 34.83.17.181:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00400e5c0, {0xc004d05ef0, 0xa}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc003a5b380}, {0xc004d05ef0, 0xa}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00400e650?, {0xc004d05ef0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00052b860) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc004c63420?, 0xc0042e0fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc002d11408?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc004c63420?, 0x29449fc?}, {0xae73300?, 0xc0042e0f80?, 0xc003e6df30?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-apps] CronJob tear down framework | framework.go:193 STEP: Destroying namespace "cronjob-44" for this suite. 
11/26/22 06:26:50.883 Nov 26 06:26:50.922: FAIL: Couldn't delete ns: "cronjob-44": Delete "https://34.83.17.181/api/v1/namespaces/cronjob-44": dial tcp 34.83.17.181:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.17.181/api/v1/namespaces/cronjob-44", Err:(*net.OpError)(0xc00433d270)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00052b860) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc004c633a0?, 0xc0042e0fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc004c633a0?, 0x0?}, {0xae73300?, 0x5?, 0xc003f98600?}) /usr/local/go/src/reflect/value.go:368 +0xbc
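For reference, the object this spec creates is small. A minimal client-go sketch of an equivalent ForbidConcurrent CronJob follows; the name, image, schedule, and namespace are illustrative rather than copied from the test. With ConcurrencyPolicy: Forbid, the controller skips a scheduled run while a previous Job is still active, which is the behavior the "Ensuring a job is scheduled" step starts to verify before the apiserver became unreachable.

```go
// Sketch of an equivalent ForbidConcurrent CronJob built with client-go.
// The long-sleeping container makes each Job outlive the one-minute
// schedule, so Forbid visibly suppresses the next run.
package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "forbid"},
		Spec: batchv1.CronJobSpec{
			Schedule:          "*/1 * * * *",
			ConcurrencyPolicy: batchv1.ForbidConcurrent, // skip runs while a Job is still active
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "busybox",
								Command: []string{"sleep", "300"}, // outlive the schedule interval
							}},
						},
					},
				},
			},
		},
	}
	if _, err := client.BatchV1().CronJobs("default").Create(context.Background(), cj, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```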
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sBurst\sscaling\sshould\srun\sto\scompletion\seven\swith\sunhealthy\spods\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000b481e0)
	test/e2e/framework/framework.go:241 +0x96f
from junit_01.xml
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:31:23.041 Nov 26 06:31:23.041: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/26/22 06:31:23.043 Nov 26 06:31:23.082: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:25.122: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:27.122: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:29.122: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:31.122: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:33.122: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:35.122: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:37.122: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:39.122: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:41.122: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:43.122: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:45.122: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:47.122: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:49.122: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:51.122: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:53.122: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:53.162: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:53.162: INFO: Unexpected error: <*errors.errorString | 0xc000195d70>: { s: "timed out waiting for the condition", } Nov 26 06:31:53.162: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000b481e0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-apps] 
StatefulSet test/e2e/framework/node/init/init.go:32 Nov 26 06:31:53.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 06:31:53.203 [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193
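This spec never ran its body: BeforeEach at framework.go:241 retries namespace creation and, when the apiserver stays unreachable, fails with wait's generic "timed out waiting for the condition". A rough sketch of that retry shape, assuming an illustrative 2s poll interval and 30s timeout (the framework's real constants and naming scheme may differ):

```go
// Sketch of the namespace-creation retry the framework's BeforeEach
// performs: keep trying to create the test namespace, logging each failure,
// until success or timeout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func createTestNamespace(client kubernetes.Interface, baseName string) (*corev1.Namespace, error) {
	var created *corev1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"}}
		got, err := client.CoreV1().Namespaces().Create(context.Background(), ns, metav1.CreateOptions{})
		if err != nil {
			// Matches the log above: each refused connection is logged, then retried.
			fmt.Printf("Unexpected error while creating namespace: %v\n", err)
			return false, nil
		}
		created = got
		return true, nil
	})
	// On give-up, err is wait's "timed out waiting for the condition".
	return created, err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns, err := createTestNamespace(client, "statefulset")
	if err != nil {
		panic(err)
	}
	fmt.Println("created", ns.Name)
}
```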
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/statefulset/wait.go:120
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForStatusReadyReplicas({0x801de88?, 0xc001c00d00}, 0xc000738a00, 0x0)
	test/e2e/framework/statefulset/wait.go:120 +0x231
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10()
	test/e2e/apps/statefulset.go:678 +0x975
from junit_01.xml
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:08:34.439 Nov 26 06:08:34.439: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename statefulset 11/26/22 06:08:34.441 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 06:08:34.753 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 06:08:34.857 [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:98 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:113 STEP: Creating service test in namespace statefulset-6296 11/26/22 06:08:34.978 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] test/e2e/apps/statefulset.go:587 STEP: Initializing watcher for selector baz=blah,foo=bar 11/26/22 06:08:35.069 STEP: Creating stateful set ss in namespace statefulset-6296 11/26/22 06:08:35.147 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6296 11/26/22 06:08:35.213 Nov 26 06:08:35.311: INFO: Found 0 stateful pods, waiting for 1 Nov 26 06:08:45.353: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Nov 26 06:08:55.352: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 11/26/22 06:08:55.352 Nov 26 06:08:55.394: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=statefulset-6296 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 26 06:08:55.938: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 26 06:08:55.938: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 26 06:08:55.938: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 26 06:08:55.983: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 26 06:09:06.026: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 26 06:09:06.026: INFO: Waiting for statefulset status.replicas updated to 0 Nov 26 06:09:06.196: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999534s Nov 26 06:09:07.239: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.957367492s Nov 26 06:09:08.282: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.91481115s Nov 26 06:09:09.325: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.872022401s Nov 26 06:09:10.383: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.82807673s Nov 26 06:09:11.432: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.770829168s Nov 26 06:09:12.477: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.72156795s Nov 26 06:09:13.521: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.676040444s Nov 26 06:09:14.624: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.633047787s Nov 26 06:09:15.692: INFO: Verifying statefulset ss doesn't scale past 1 for another 529.769117ms STEP: Scaling up 
stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6296 11/26/22 06:09:16.693 Nov 26 06:09:16.759: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=statefulset-6296 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 26 06:09:17.417: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Nov 26 06:09:17.417: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 26 06:09:17.417: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 26 06:09:17.478: INFO: Found 1 stateful pods, waiting for 3 Nov 26 06:09:27.561: INFO: Found 2 stateful pods, waiting for 3 Nov 26 06:09:37.542: INFO: Found 2 stateful pods, waiting for 3 Nov 26 06:09:47.542: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 26 06:09:47.542: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false Nov 26 06:09:57.564: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 26 06:09:57.564: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 26 06:09:57.564: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order 11/26/22 06:09:57.564 STEP: Scale down will halt with unhealthy stateful pod 11/26/22 06:09:57.564 Nov 26 06:09:57.703: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=statefulset-6296 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 26 06:09:58.619: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 26 06:09:58.619: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 26 06:09:58.619: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 26 06:09:58.619: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=statefulset-6296 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 26 06:09:59.378: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 26 06:09:59.378: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 26 06:09:59.378: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 26 06:09:59.378: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=statefulset-6296 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 26 06:10:00.628: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Nov 26 06:10:00.628: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 26 06:10:00.628: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 26 06:10:00.628: INFO: Waiting for statefulset status.replicas updated to 0 Nov 26 06:10:00.693: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 26 06:10:10.771: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 26 06:10:21.022: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 26 06:10:30.767: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 26 06:10:40.753: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 26 06:10:53.800: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 26 06:11:00.758: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 26 06:11:10.753: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 26 06:11:20.754: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 26 06:11:30.748: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 26 06:11:40.899: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 26 06:11:50.769: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 26 06:12:00.779: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 26 06:12:10.757: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 26 06:12:20.784: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 26 06:12:30.751: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 26 06:12:40.735: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 26 06:12:50.735: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 26 06:13:00.735: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 26 06:13:10.735: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 26 06:13:20.735: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 26 06:13:30.737: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 5m0.63s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 5m0s) test/e2e/apps/statefulset.go:587 At [By Step] Scale down will halt with unhealthy stateful pod (Step Runtime: 3m37.505s) test/e2e/apps/statefulset.go:649 Spec Goroutine goroutine 223 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002ef3fe0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc0040bbd80?, 0x262a967?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76fb525?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForStatusReadyReplicas({0x801de88?, 0xc001c00d00}, 0xc000738a00, 0x0) test/e2e/framework/statefulset/wait.go:104 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10() test/e2e/apps/statefulset.go:678 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7fb8180, 0xc00409cb40}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 Goroutines of Interest goroutine 141 [select] k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc003576cc0}, {0x7fbcaa0, 0xc000b4a2c0}, {0xc002cf1f38, 0x1, 0x2?}) vendor/k8s.io/client-go/tools/watch/until.go:73 k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc003576cc0}, {0xc001a05108?, 0x75b5154?}, {0x7facee0?, 0xc001525c80?}, {0xc002cf1f38, 0x1, 0x1}) vendor/k8s.io/client-go/tools/watch/until.go:114 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.3() test/e2e/apps/statefulset.go:665 > k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10 test/e2e/apps/statefulset.go:657 ------------------------------ Nov 26 06:13:40.735: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 26 06:13:51.002: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 ------------------------------ Progress Report for Ginkgo Process #21 Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 5m20.632s) test/e2e/apps/statefulset.go:587 In [It] (Node Runtime: 5m20.003s) test/e2e/apps/statefulset.go:587 At [By Step] Scale down will halt with unhealthy stateful pod (Step Runtime: 3m57.507s) test/e2e/apps/statefulset.go:649 Spec Goroutine goroutine 223 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002ef3fe0, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x40?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc0040bbd80?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76fb525?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForStatusReadyReplicas({0x801de88?, 0xc001c00d00}, 0xc000738a00, 0x0)
    test/e2e/framework/statefulset/wait.go:104
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10()
    test/e2e/apps/statefulset.go:678
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x7fb8180, 0xc00409cb40})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
Goroutines of Interest
goroutine 141 [select]
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.UntilWithoutRetry({0x7fe0c00, 0xc003576cc0}, {0x7fbcaa0, 0xc000b4a2c0}, {0xc002cf1f38, 0x1, 0x2?})
    vendor/k8s.io/client-go/tools/watch/until.go:73
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/watch.Until({0x7fe0c00, 0xc003576cc0}, {0xc001a05108?, 0x75b5154?}, {0x7facee0?, 0xc001525c80?}, {0xc002cf1f38, 0x1, 0x1})
    vendor/k8s.io/client-go/tools/watch/until.go:114
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10.3()
    test/e2e/apps/statefulset.go:665
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10
    test/e2e/apps/statefulset.go:657
------------------------------
Nov 26 06:14:00.780: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:14:10.757: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:14:20.753: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:14:30.797: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:14:40.746: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:14:50.823: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:15:00.780: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:15:10.762: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:15:20.815: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:15:30.889: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Nov 26 06:15:40.788: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:15:50.776: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:16:00.812: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:16:10.739: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Nov 26 06:16:20.763: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Nov 26 06:16:30.832: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:16:40.782: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:16:50.787: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:17:00.787: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:17:10.818: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:17:20.753: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:17:30.800: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:17:40.920: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:17:50.764: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Nov 26 06:18:00.757: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Nov 26 06:18:10.761: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Nov 26 06:18:20.750: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Nov 26 06:18:30.773: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:18:40.734: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:18:50.746: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
------------------------------
Progress Report for Ginkgo Process #21
Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 5m40.634s through 10m20.667s, reported every 20s; each report carries the identical stacks shown below)
  test/e2e/apps/statefulset.go:587
In [It] (Node Runtime: 5m40.004s through 10m20.037s)
  test/e2e/apps/statefulset.go:587
At [By Step] Scale down will halt with unhealthy stateful pod (Step Runtime: 4m17.509s through 8m57.542s)
  test/e2e/apps/statefulset.go:649
Spec Goroutine
goroutine 223 [select]
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002ef3fe0, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x40?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc0040bbd80?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76fb525?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForStatusReadyReplicas({0x801de88?, 0xc001c00d00}, 0xc000738a00, 0x0)
    test/e2e/framework/statefulset/wait.go:104
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10()
    test/e2e/apps/statefulset.go:678
(ginkgo runNode frames as above)
Goroutines of Interest
goroutine 141 [select] (intermittently reported as [select, 2 minutes]; watch.Until stack as above)
------------------------------
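The spec goroutine's stack shows the shape of this wait: WaitForStatusReadyReplicas drives a wait.PollImmediate loop, and each iteration fetches the StatefulSet and compares status.readyReplicas against the expected count. A minimal sketch of that pattern against client-go, assuming a ready clientset - the function name, the 10s/10m cadence, and the log wording below are illustrative, not the framework's actual code:

// Illustrative sketch of the polling loop in goroutine 223's stack:
// wait.PollImmediate -> Get the StatefulSet -> compare status.readyReplicas.
package sketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForReadyReplicas(c kubernetes.Interface, ns, name string, expected int32) error {
	// Poll every 10s for up to 10m (an assumption matching the timestamps above).
	return wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
		ss, err := c.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // abort the wait on API errors
		}
		if ss.Status.ReadyReplicas != expected {
			fmt.Printf("Waiting for stateful set status.readyReplicas to become %d, currently %d\n", expected, ss.Status.ReadyReplicas)
			return false, nil // not there yet; keep polling
		}
		return true, nil
	})
}

When the deadline passes, PollImmediate returns wait.ErrWaitTimeout ("timed out waiting for the condition"), which is the FAIL message that eventually ends this spec below.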
------------------------------
Progress Report for Ginkgo Process #21
Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 10m40.669s)
  test/e2e/apps/statefulset.go:587
In [It] (Node Runtime: 10m40.04s)
  test/e2e/apps/statefulset.go:587
At [By Step] Scale down will halt with unhealthy stateful pod (Step Runtime: 9m17.544s)
  test/e2e/apps/statefulset.go:649
Spec Goroutine
goroutine 223 [select]
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc000a86300, 0xc001180a00)
    vendor/golang.org/x/net/http2/transport.go:1200
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc0016e2300, 0xc001180a00, {0xe0?})
    vendor/golang.org/x/net/http2/transport.go:519
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(...)
    vendor/golang.org/x/net/http2/transport.go:480
k8s.io/kubernetes/vendor/golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc003fe4000?}, 0xc001180a00?)
    vendor/golang.org/x/net/http2/transport.go:3020
net/http.(*Transport).roundTrip(0xc003fe4000, 0xc001180a00)
    /usr/local/go/src/net/http/transport.go:540
net/http.(*Transport).RoundTrip(0x6fe4b20?, 0xc002ef8ff0?)
    /usr/local/go/src/net/http/roundtrip.go:17
k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc002ef93e0, 0xc001180900)
    vendor/k8s.io/client-go/transport/round_trippers.go:317
k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc00141d140, 0xc001180800)
    vendor/k8s.io/client-go/transport/round_trippers.go:168
net/http.send(0xc001180800, {0x7fad100, 0xc00141d140}, {0x74d54e0?, 0x1?, 0x0?})
    /usr/local/go/src/net/http/client.go:251
net/http.(*Client).send(0xc002ef9410, 0xc001180800, {0x7f3a80ae2108?, 0x100?, 0x0?})
    /usr/local/go/src/net/http/client.go:175
net/http.(*Client).do(0xc002ef9410, 0xc001180800)
    /usr/local/go/src/net/http/client.go:715
net/http.(*Client).Do(...)
    /usr/local/go/src/net/http/client.go:581
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc001180600, {0x7fe0bc8, 0xc0000820e0}, 0x75b5196?)
    vendor/k8s.io/client-go/rest/request.go:964
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc001180600, {0x7fe0bc8, 0xc0000820e0})
    vendor/k8s.io/client-go/rest/request.go:1005
k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/apps/v1.(*statefulSets).Get(0xc002efc540, {0x7fe0bc8, 0xc0000820e0}, {0xc0042e74b8, 0x2}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}})
    vendor/k8s.io/client-go/kubernetes/typed/apps/v1/statefulset.go:86
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForStatusReadyReplicas.func1()
    test/e2e/framework/statefulset/wait.go:106
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0000820c8?}, 0xc0042e74b8?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0000820c8}, 0xc002ef3fe0, 0x2fdb16a?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0000820c8}, 0x30?, 0x2fd9d05?, 0x40?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0000820c8}, 0x75b521a?, 0xc0040bbd80?, 0x262a967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75b6f82?, 0x4?, 0x76fb525?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForStatusReadyReplicas({0x801de88?, 0xc001c00d00}, 0xc000738a00, 0x0)
    test/e2e/framework/statefulset/wait.go:104
> k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10()
    test/e2e/apps/statefulset.go:678
(ginkgo runNode frames as above)
Goroutines of Interest
goroutine 141 [select, 2 minutes] (watch.Until stack as above)
------------------------------
Nov 26 06:19:20.355: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:19:20.735: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:19:30.735: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
------------------------------
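Alongside the poller, goroutine 141 has been parked in client-go's watch.Until for minutes at a stretch (the "[select, 2 minutes]" annotations). watchtools.Until blocks in a select until a watch event satisfies one of its condition funcs, the watch closes, or the context ends. A sketch of that pattern with illustrative names - the actual condition at statefulset.go:665 is not shown in this log:

// Illustrative sketch of the watchtools.Until pattern goroutine 141 is blocked in.
package sketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/tools/cache"
	watchtools "k8s.io/client-go/tools/watch"
)

// waitViaWatch resumes watching from resourceVersion rv and blocks until the
// StatefulSet reports the expected replica count or ctx is done.
func waitViaWatch(ctx context.Context, rv string, w cache.Watcher, expected int32) error {
	_, err := watchtools.Until(ctx, rv, w, func(event watch.Event) (bool, error) {
		ss, ok := event.Object.(*appsv1.StatefulSet)
		if !ok {
			return false, nil // ignore events for other object types
		}
		return ss.Status.Replicas == expected, nil
	})
	return err
}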
Progress Report for Ginkgo Process #21
Automatically polling progress: [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] (Spec Runtime: 11m0.672s, and again at 11m20.675s; both reports repeat the goroutine 223 polling stack above, with goroutine 141 still parked in watch.Until - [select, 3 minutes] by the final report)
  test/e2e/apps/statefulset.go:587
------------------------------
Nov 26 06:19:40.735: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:19:50.755: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:20:00.754: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:20:00.882: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Nov 26 06:20:00.882: FAIL: Failed waiting for stateful set status.readyReplicas updated to 0: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForStatusReadyReplicas({0x801de88?, 0xc001c00d00}, 0xc000738a00, 0x0)
    test/e2e/framework/statefulset/wait.go:120 +0x231
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.10()
    test/e2e/apps/statefulset.go:678 +0x975
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:124
Nov 26 06:20:00.941: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=statefulset-6296 describe po ss-0'
Nov 26 06:20:01.305: INFO: stderr: ""
Nov 26 06:20:01.305: INFO: Output of kubectl describe ss-0:
Name:             ss-0
Namespace:        statefulset-6296
Priority:         0
Service Account:  default
Node:             bootstrap-e2e-minion-group-8xrn/10.138.0.4
Start Time:       Sat, 26 Nov 2022 06:08:35 +0000
Labels:           baz=blah
                  controller-revision-hash=ss-7b6c9599d5
                  foo=bar
                  statefulset.kubernetes.io/pod-name=ss-0
Annotations:      <none>
Status:           Running
IP:               10.64.1.123
IPs:
  IP:  10.64.1.123
Controlled By:  StatefulSet/ss
Containers:
  webserver:
    Container ID:   containerd://9af26bf016dde21b23409eeb4bcd2fdd228513c2d26364cec05fa637bf229e56
    Image:          registry.k8s.io/e2e-test-images/httpd:2.4.38-4
    Image ID:       registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Sat, 26 Nov 2022 06:17:49 +0000
      Finished:     Sat, 26 Nov 2022 06:18:19 +0000
    Ready:          False
    Restart Count:  5
    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gznbv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-gznbv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  11m                   default-scheduler  Successfully assigned statefulset-6296/ss-0 to bootstrap-e2e-minion-group-8xrn
  Normal   Pulling    11m                   kubelet            Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4"
  Normal   Pulled     11m                   kubelet            Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 6.015766737s (6.015782789s including waiting)
  Normal   Created    11m                   kubelet            Created container webserver
  Normal   Started    11m                   kubelet            Started container webserver
  Warning  Unhealthy  10m (x21 over 11m)    kubelet            Readiness probe failed: HTTP probe failed with statuscode: 404
  Warning  BackOff    73s (x27 over 8m55s)  kubelet            Back-off restarting failed container webserver in pod ss-0_statefulset-6296(3807ded2-8771-446e-94e2-f17bfbdfba40)
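The describe output above pinpoints the unhealthy pod: the readiness probe runs every second with a 1s timeout and a failure threshold of 1, so a single failed GET /index.html marks ss-0 NotReady. The events show the probe failing with 404 (x21) and, separately, the container sitting in a restart back-off loop. For reference, that Readiness line corresponds roughly to these client-go types (an illustrative sketch, not the test's actual pod spec):

// Sketch of the probe printed as
// "http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1".
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

var readinessProbe = &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/index.html",
			Port: intstr.FromInt(80),
		},
	},
	InitialDelaySeconds: 0, // delay=0s
	TimeoutSeconds:      1, // timeout=1s
	PeriodSeconds:       1, // period=1s
	SuccessThreshold:    1, // #success=1
	FailureThreshold:    1, // #failure=1: one failed probe flips the pod to NotReady
}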
45\n10.64.1.1 - - [26/Nov/2022:06:18:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.64.1.1 - - [26/Nov/2022:06:18:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.64.1.1 - - [26/Nov/2022:06:18:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n" Nov 26 06:20:01.596: INFO: Last 100 log lines of ss-0: [Sat Nov 26 06:17:49.763715 2022] [mpm_event:notice] [pid 1:tid 139962561608552] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Sat Nov 26 06:17:49.763795 2022] [core:notice] [pid 1:tid 139962561608552] AH00094: Command line: 'httpd -D FOREGROUND' 10.64.1.1 - - [26/Nov/2022:06:17:50 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:17:51 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:17:52 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:17:53 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:17:54 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:17:55 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:17:56 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:17:57 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:17:58 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:17:59 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:00 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:01 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:02 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:03 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:04 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:05 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:06 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:07 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:08 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:09 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:10 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:11 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:12 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:13 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:14 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:15 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:16 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:17 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:18 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.1.1 - - [26/Nov/2022:06:18:19 +0000] "GET /index.html HTTP/1.1" 200 45 Nov 26 06:20:01.596: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=statefulset-6296 describe po ss-1' Nov 26 06:20:01.915: INFO: stderr: "" Nov 26 06:20:01.915: INFO: stdout: "Name: ss-1\nNamespace: statefulset-6296\nPriority: 0\nService Account: default\nNode: bootstrap-e2e-minion-group-6hf3/10.138.0.3\nStart Time: Sat, 26 Nov 2022 06:09:25 +0000\nLabels: baz=blah\n controller-revision-hash=ss-7b6c9599d5\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-1\nAnnotations: <none>\nStatus: Running\nIP: 10.64.3.90\nIPs:\n IP: 
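The "Running ... kubectl" / stderr / stdout triplets in this log come from the harness shelling out to kubectl and capturing both streams. A minimal sketch of that pattern with os/exec, using the server, kubeconfig, and namespace values from the log above; this is an illustration of the pattern, not the framework's actual kubectl helper:

```go
// Sketch: shell out to kubectl and capture stdout/stderr separately,
// mirroring the "Running ..." / stderr / stdout lines in this log.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl",
		"--server=https://34.83.17.181",
		"--kubeconfig=/workspace/.kube/config",
		"--namespace=statefulset-6296",
		"logs", "ss-0", "--tail=100")

	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout // collected and printed after the run, like the INFO lines
	cmd.Stderr = &stderr

	if err := cmd.Run(); err != nil {
		fmt.Printf("kubectl failed: %v, stderr: %q\n", err, stderr.String())
		return
	}
	fmt.Printf("stderr: %q\n", stderr.String())
	fmt.Printf("stdout: %q\n", stdout.String())
}
```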
Nov 26 06:20:01.596: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=statefulset-6296 describe po ss-1'
Nov 26 06:20:01.915: INFO: stderr: ""
Nov 26 06:20:01.915: INFO: Output of kubectl describe ss-1:
Name:             ss-1
Namespace:        statefulset-6296
Priority:         0
Service Account:  default
Node:             bootstrap-e2e-minion-group-6hf3/10.138.0.3
Start Time:       Sat, 26 Nov 2022 06:09:25 +0000
Labels:           baz=blah
                  controller-revision-hash=ss-7b6c9599d5
                  foo=bar
                  statefulset.kubernetes.io/pod-name=ss-1
Annotations:      <none>
Status:           Running
IP:               10.64.3.90
IPs:
  IP:           10.64.3.90
Controlled By:  StatefulSet/ss
Containers:
  webserver:
    Container ID:   containerd://c3326c3ed188e6e9f3fc3d57ce2c0c443321618543e170df88792cf2d05b0699
    Image:          registry.k8s.io/e2e-test-images/httpd:2.4.38-4
    Image ID:       registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 26 Nov 2022 06:15:25 +0000
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 26 Nov 2022 06:12:40 +0000
      Finished:     Sat, 26 Nov 2022 06:12:42 +0000
    Ready:          True
    Restart Count:  6
    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8zmfh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-8zmfh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       10m                default-scheduler  Successfully assigned statefulset-6296/ss-1 to bootstrap-e2e-minion-group-6hf3
  Normal   Pulling         10m                kubelet            Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4"
  Normal   Pulled          10m                kubelet            Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 6.792243244s (6.792259662s including waiting)
  Warning  Unhealthy       10m                kubelet            Readiness probe failed: Get "http://10.64.3.24:80/index.html": dial tcp 10.64.3.24:80: connect: connection refused
  Warning  Unhealthy       10m                kubelet            Readiness probe failed: Get "http://10.64.3.24:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy       10m                kubelet            Readiness probe failed: Get "http://10.64.3.34:80/index.html": dial tcp 10.64.3.34:80: connect: connection refused
  Normal   Created         10m (x3 over 10m)  kubelet            Created container webserver
  Normal   Pulled          10m (x2 over 10m)  kubelet            Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine
  Normal   Started         10m (x3 over 10m)  kubelet            Started container webserver
  Normal   SandboxChanged  10m (x3 over 10m)  kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Killing         10m (x3 over 10m)     kubelet          Stopping container webserver
  Warning  Unhealthy       10m (x2 over 10m)     kubelet          Readiness probe failed: HTTP probe failed with statuscode: 404
  Warning  BackOff         5m28s (x31 over 10m)  kubelet          Back-off restarting failed container webserver in pod ss-1_statefulset-6296(f6525ca0-f7f3-4743-a012-a56312091408)
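The describe output above renders the readiness probe as "http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1". A hedged sketch of the corev1.Probe that produces that line, assuming a recent k8s.io/api where the embedded field is named ProbeHandler; with failureThreshold=1 and period=1s, a single failed GET is enough to flip the pod to not-ready, which is why the Unhealthy events above pile up so quickly:

```go
// Sketch: the pod-spec probe corresponding to the "Readiness:" line above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

var readiness = &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/index.html",
			Port: intstr.FromInt(80),
		},
	},
	InitialDelaySeconds: 0, // delay=0s
	TimeoutSeconds:      1, // timeout=1s
	PeriodSeconds:       1, // period=1s
	SuccessThreshold:    1, // #success=1
	FailureThreshold:    1, // #failure=1
}

func main() { fmt.Printf("%+v\n", readiness) }
```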
Nov 26 06:20:01.915: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=statefulset-6296 logs ss-1 --tail=100'
Nov 26 06:20:02.202: INFO: stderr: ""
Nov 26 06:20:02.203: INFO: Last 100 log lines of ss-1:
10.64.3.1 - - [26/Nov/2022:06:18:23 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:24 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:25 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:26 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:27 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:28 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:29 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:30 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:31 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:32 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:33 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:34 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:35 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:36 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:37 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:38 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:39 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:40 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:41 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:42 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:43 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:44 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:45 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:46 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:47 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:48 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:49 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:50 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:51 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:52 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:53 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:54 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:55 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:56 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:57 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:58 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:18:59 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:00 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:01 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:02 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:03 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:04 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:05 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:06 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:07 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:08 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:09 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:10 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:11 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:12 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:13 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:14 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:15 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:16 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:17 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:18 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:19 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:20 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:21 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:22 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:23 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:24 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:25 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:26 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:27 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:28 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:29 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:30 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:31 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:32 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.3.1 - - [26/Nov/2022:06:19:33 +0000] "GET /index.html HTTP/1.1" 200 45
[26/Nov/2022:06:19:34 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:35 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:36 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:37 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:38 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:39 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:40 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:41 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:42 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:43 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:44 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:45 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:46 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:47 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:48 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:49 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:50 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:51 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:52 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:53 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:54 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:55 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:56 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:57 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:58 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:19:59 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:20:00 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:20:01 +0000] "GET /index.html HTTP/1.1" 200 45 10.64.3.1 - - [26/Nov/2022:06:20:02 +0000] "GET /index.html HTTP/1.1" 200 45 Nov 26 06:20:02.203: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=statefulset-6296 describe po ss-2' Nov 26 06:20:02.542: INFO: stderr: "" Nov 26 06:20:02.542: INFO: stdout: "Name: ss-2\nNamespace: statefulset-6296\nPriority: 0\nService Account: default\nNode: bootstrap-e2e-minion-group-4lvd/10.138.0.5\nStart Time: Sat, 26 Nov 2022 06:09:41 +0000\nLabels: baz=blah\n controller-revision-hash=ss-7b6c9599d5\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-2\nAnnotations: <none>\nStatus: Running\nIP: 10.64.0.71\nIPs:\n IP: 10.64.0.71\nControlled By: StatefulSet/ss\nContainers:\n webserver:\n Container ID: containerd://af40358fb5ed4f6f734df72ae2404c7cf15e82907c724f2c743ad36f53db747b\n Image: registry.k8s.io/e2e-test-images/httpd:2.4.38-4\n Image ID: registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Sat, 26 Nov 2022 06:16:04 +0000\n Last State: Terminated\n Reason: Completed\n Exit Code: 0\n Started: Sat, 26 Nov 2022 06:10:18 +0000\n Finished: Sat, 26 Nov 2022 06:15:34 +0000\n Ready: True\n Restart Count: 3\n Readiness: http-get http://:80/index.html delay=0s 
Nov 26 06:20:02.203: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=statefulset-6296 describe po ss-2'
Nov 26 06:20:02.542: INFO: stderr: ""
Nov 26 06:20:02.542: INFO: Output of kubectl describe ss-2:
Name:             ss-2
Namespace:        statefulset-6296
Priority:         0
Service Account:  default
Node:             bootstrap-e2e-minion-group-4lvd/10.138.0.5
Start Time:       Sat, 26 Nov 2022 06:09:41 +0000
Labels:           baz=blah
                  controller-revision-hash=ss-7b6c9599d5
                  foo=bar
                  statefulset.kubernetes.io/pod-name=ss-2
Annotations:      <none>
Status:           Running
IP:               10.64.0.71
IPs:
  IP:           10.64.0.71
Controlled By:  StatefulSet/ss
Containers:
  webserver:
    Container ID:   containerd://af40358fb5ed4f6f734df72ae2404c7cf15e82907c724f2c743ad36f53db747b
    Image:          registry.k8s.io/e2e-test-images/httpd:2.4.38-4
    Image ID:       registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 26 Nov 2022 06:16:04 +0000
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 26 Nov 2022 06:10:18 +0000
      Finished:     Sat, 26 Nov 2022 06:15:34 +0000
    Ready:          True
    Restart Count:  3
    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q75nb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-q75nb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                  From               Message
  ----     ------          ----                 ----               -------
  Normal   Scheduled       10m                  default-scheduler  Successfully assigned statefulset-6296/ss-2 to bootstrap-e2e-minion-group-4lvd
  Normal   Pulling         10m                  kubelet            Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4"
  Normal   Pulled          10m                  kubelet            Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 3.293571943s (3.293622235s including waiting)
  Warning  Unhealthy       10m                  kubelet            Readiness probe failed: Get "http://10.64.0.29:80/index.html": read tcp 10.64.0.1:37312->10.64.0.29:80: read: connection reset by peer
  Warning  Unhealthy       10m                  kubelet            Readiness probe failed: Get "http://10.64.0.29:80/index.html": dial tcp 10.64.0.29:80: connect: connection refused
  Warning  Unhealthy       10m (x2 over 10m)    kubelet            Readiness probe failed: HTTP probe failed with statuscode: 404
  Warning  Unhealthy       10m                  kubelet            Readiness probe failed: Get "http://10.64.0.30:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Normal   Created         9m44s (x3 over 10m)  kubelet            Created container webserver
  Normal   Pulled          9m44s (x2 over 10m)  kubelet            Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine
  Normal   Started         9m44s (x3 over 10m)  kubelet            Started container webserver
  Normal   Killing         4m28s (x3 over 10m)  kubelet            Stopping container webserver
  Normal   SandboxChanged  4m27s (x3 over 10m)  kubelet            Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         4m27s (x4 over 10m)  kubelet            Back-off restarting failed container webserver in pod ss-2_statefulset-6296(32e79780-e8c9-4369-b30f-cfccff8cacf4)
  Warning  Unhealthy       4m27s                kubelet            Readiness probe failed: Get "http://10.64.0.32:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
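"Restart Count: 3" and the "Last State: Terminated" block in the describe output above are surfaced from the pod's containerStatuses. A hedged sketch of reading the same fields with client-go; the function name is illustrative and the namespace and pod name follow this log:

```go
// Sketch: inspect restart count and last termination state for ss-2,
// the fields kubectl describe renders as "Restart Count" / "Last State".
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func dumpRestarts(ctx context.Context, c kubernetes.Interface) error {
	pod, err := c.CoreV1().Pods("statefulset-6296").Get(ctx, "ss-2", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, cs := range pod.Status.ContainerStatuses {
		fmt.Printf("%s: restarts=%d ready=%t\n", cs.Name, cs.RestartCount, cs.Ready)
		if t := cs.LastTerminationState.Terminated; t != nil {
			fmt.Printf("  last state: %s, exit code %d\n", t.Reason, t.ExitCode)
		}
	}
	return nil
}
```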
Nov 26 06:20:02.542: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=statefulset-6296 logs ss-2 --tail=100'
Nov 26 06:20:02.807: INFO: stderr: ""
Nov 26 06:20:02.807: INFO: Last 100 log lines of ss-2:
10.64.0.1 - - [26/Nov/2022:06:18:22 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:23 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:24 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:25 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:26 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:27 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:28 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:29 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:30 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:31 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:32 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:33 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:34 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:35 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:36 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:37 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:38 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:39 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:40 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:41 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:42 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:43 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:44 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:45 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:46 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:47 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:48 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:49 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:50 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:51 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:52 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:53 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:54 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:55 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:56 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:57 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:58 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:18:59 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:00 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:01 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:02 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:03 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:04 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:05 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:06 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:07 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:08 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:09 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:10 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:11 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:12 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:13 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:14 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:15 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:16 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:17 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:18 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:19 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:20 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:21 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:22 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:23 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:24 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:25 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:26 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:27 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:28 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:29 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:30 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:31 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:32 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:33 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:34 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:35 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:36 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:37 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:38 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:39 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:40 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:41 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:42 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:43 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:44 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:45 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:46 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:47 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:48 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:49 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:50 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:51 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:52 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:53 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:54 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:55 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:56 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:57 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:58 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:19:59 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:20:00 +0000] "GET /index.html HTTP/1.1" 200 45
10.64.0.1 - - [26/Nov/2022:06:20:01 +0000] "GET /index.html HTTP/1.1" 200 45
Nov 26 06:20:02.807: INFO: Deleting all statefulset in ns statefulset-6296
Nov 26 06:20:02.863: INFO: Scaling statefulset ss to 0
Nov 26 06:20:13.152: INFO: Waiting for statefulset status.replicas updated to 0
Nov 26 06:20:13.213: INFO: Deleting statefulset ss
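The teardown above scales the StatefulSet to 0, waits for status.replicas to drain, and then deletes it. A rough client-go sketch of that "Scaling statefulset ss to 0" ... "Deleting statefulset ss" sequence; the function name is illustrative and the polling is simplified to a single wait loop:

```go
// Sketch: scale a StatefulSet to 0, wait for status.replicas to reach 0,
// then delete it -- the sequence logged above.
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func deleteStatefulSet(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	ss, err := c.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	zero := int32(0)
	ss.Spec.Replicas = &zero // "Scaling statefulset ss to 0"
	if _, err := c.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
		return err
	}
	// "Waiting for statefulset status.replicas updated to 0"
	err = wait.PollUntilContextTimeout(ctx, time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			cur, err := c.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return cur.Status.Replicas == 0, nil
		})
	if err != nil {
		return err
	}
	// "Deleting statefulset ss"
	return c.AppsV1().StatefulSets(ns).Delete(ctx, name, metav1.DeleteOptions{})
}
```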
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/node/init/init.go:32
Nov 26 06:20:13.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] StatefulSet
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] StatefulSet
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 06:20:13.446
STEP: Collecting events from namespace "statefulset-6296". 11/26/22 06:20:13.446
STEP: Found 40 events. 11/26/22 06:20:13.502
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:08:35 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:08:35 +0000 UTC - event for ss-0: {default-scheduler } Scheduled: Successfully assigned statefulset-6296/ss-0 to bootstrap-e2e-minion-group-8xrn
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:08:37 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-8xrn} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4"
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:08:43 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-8xrn} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 6.015766737s (6.015782789s including waiting)
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:08:43 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-8xrn} Created: Created container webserver
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:08:43 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-8xrn} Started: Started container webserver
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:08:56 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-8xrn} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 404
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:25 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-1 in StatefulSet ss successful
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:25 +0000 UTC - event for ss-1: {default-scheduler } Scheduled: Successfully assigned statefulset-6296/ss-1 to bootstrap-e2e-minion-group-6hf3
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:27 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6hf3} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4"
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:34 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6hf3} Created: Created container webserver
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:34 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6hf3} Started: Started container webserver
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:34 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6hf3} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 6.792243244s (6.792259662s including waiting)
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:35 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6hf3} Killing: Stopping container webserver
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:36 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6hf3} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:80/index.html": dial tcp 10.64.3.24:80: connect: connection refused
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:37 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6hf3} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:38 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6hf3} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:38 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6hf3} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:40 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6hf3} Unhealthy: Readiness probe failed: Get "http://10.64.3.34:80/index.html": dial tcp 10.64.3.34:80: connect: connection refused
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:41 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-2 in StatefulSet ss successful
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:41 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6hf3} BackOff: Back-off restarting failed container webserver in pod ss-1_statefulset-6296(f6525ca0-f7f3-4743-a012-a56312091408)
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:41 +0000 UTC - event for ss-2: {default-scheduler } Scheduled: Successfully assigned statefulset-6296/ss-2 to bootstrap-e2e-minion-group-4lvd
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:43 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-4lvd} Pulling: Pulling image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4"
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:46 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-4lvd} Unhealthy: Readiness probe failed: Get "http://10.64.0.29:80/index.html": read tcp 10.64.0.1:37312->10.64.0.29:80: read: connection reset by peer
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:46 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-4lvd} Killing: Stopping container webserver
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:46 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-4lvd} Created: Created container webserver
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:46 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-4lvd} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" in 3.293571943s (3.293622235s including waiting)
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:46 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-4lvd} Started: Started container webserver
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:47 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-4lvd} Unhealthy: Readiness probe failed: Get "http://10.64.0.29:80/index.html": dial tcp 10.64.0.29:80: connect: connection refused
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:48 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-4lvd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:09:49 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-4lvd} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:10:00 +0000 UTC - event for ss-1: {kubelet bootstrap-e2e-minion-group-6hf3} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 404
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:10:00 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-4lvd} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 404
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:10:02 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-4lvd} Unhealthy: Readiness probe failed: Get "http://10.64.0.30:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:10:02 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-4lvd} BackOff: Back-off restarting failed container webserver in pod ss-2_statefulset-6296(32e79780-e8c9-4369-b30f-cfccff8cacf4)
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:11:06 +0000 UTC - event for ss-0: {kubelet bootstrap-e2e-minion-group-8xrn} BackOff: Back-off restarting failed container webserver in pod ss-0_statefulset-6296(3807ded2-8771-446e-94e2-f17bfbdfba40)
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:15:35 +0000 UTC - event for ss-2: {kubelet bootstrap-e2e-minion-group-4lvd} Unhealthy: Readiness probe failed: Get "http://10.64.0.32:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:20:03 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-2 in StatefulSet ss successful
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:20:04 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-1 in StatefulSet ss successful
Nov 26 06:20:13.502: INFO: At 2022-11-26 06:20:05 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Nov 26 06:20:13.561: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 26 06:20:13.561: INFO: 
Nov 26 06:20:13.620: INFO: Logging node info for node bootstrap-e2e-master
Nov 26 06:20:13.679: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 193495c1-5b1f-409e-8da9-b1de094ba8ed 7957 0 2022-11-26 06:06:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 06:06:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 06:17:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:49 +0000 UTC,LastTransitionTime:2022-11-26 06:06:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:05 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:05 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:05 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:17:05 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.17.181,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d1b5362410d55fa2b320229b7ed22cb3,SystemUUID:d1b53624-10d5-5fa2-b320-229b7ed22cb3,BootID:13424371-b4dc-4f78-9364-1afc151da040,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:20:13.679: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 06:20:13.771: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 06:20:13.878: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:13.878: INFO: Container kube-apiserver ready: true, restart count 0 Nov 26 06:20:13.878: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:13.878: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 26 06:20:13.878: INFO: 
kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 06:06:05 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:13.878: INFO: Container kube-addon-manager ready: true, restart count 1 Nov 26 06:20:13.878: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 06:06:05 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:13.878: INFO: Container l7-lb-controller ready: true, restart count 6 Nov 26 06:20:13.878: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:13.878: INFO: Container etcd-container ready: true, restart count 1 Nov 26 06:20:13.878: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:13.878: INFO: Container konnectivity-server-container ready: true, restart count 1 Nov 26 06:20:13.878: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:13.878: INFO: Container kube-scheduler ready: true, restart count 2 Nov 26 06:20:13.878: INFO: metadata-proxy-v0.1-gg5tl started at 2022-11-26 06:06:31 +0000 UTC (0+2 container statuses recorded) Nov 26 06:20:13.878: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:20:13.878: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:20:13.878: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:13.878: INFO: Container etcd-container ready: true, restart count 2 Nov 26 06:20:14.181: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 06:20:14.181: INFO: Logging node info for node bootstrap-e2e-minion-group-4lvd Nov 26 06:20:14.283: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-4lvd e0d66193-5762-49e7-b872-8ea4dee27a72 8569 0 2022-11-26 06:06:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-4lvd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-4lvd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-4587":"bootstrap-e2e-minion-group-4lvd"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 06:11:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 06:16:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 06:19:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-4lvd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:04 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:04 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:04 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:17:04 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.235.77,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:60066c0df6854403f12a43e8820a94a9,SystemUUID:60066c0d-f685-4403-f12a-43e8820a94a9,BootID:c95b6e64-9275-42a1-b638-b99f8479d167,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:20:14.284: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-4lvd Nov 26 06:20:14.397: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-4lvd Nov 26 06:20:14.564: INFO: test-hostpath-type-xngqk started at 2022-11-26 06:20:06 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:14.564: INFO: Container host-path-testing ready: true, restart count 0 Nov 26 06:20:14.564: INFO: csi-mockplugin-0 started at 2022-11-26 06:14:37 +0000 UTC (0+3 container statuses recorded) Nov 26 06:20:14.564: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 06:20:14.564: INFO: Container driver-registrar ready: true, restart count 3 Nov 26 06:20:14.564: INFO: Container mock ready: true, restart count 3 Nov 26 06:20:14.564: INFO: metadata-proxy-v0.1-z77w4 started at 2022-11-26 06:06:30 +0000 UTC (0+2 container statuses recorded) Nov 26 06:20:14.564: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:20:14.564: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:20:14.564: INFO: konnectivity-agent-dx4vl started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:14.564: INFO: Container konnectivity-agent ready: false, restart count 5 Nov 26 06:20:14.564: INFO: httpd started at 2022-11-26 06:20:05 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:14.564: INFO: Container httpd ready: false, restart count 0 Nov 26 06:20:14.564: INFO: pod-b74700ba-ab26-4c8f-9bec-8b52e79c02ee started at 2022-11-26 06:20:12 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:14.564: INFO: Container write-pod ready: true, restart count 0 Nov 26 06:20:14.564: INFO: csi-hostpathplugin-0 started at 2022-11-26 06:08:55 +0000 UTC (0+7 container statuses recorded) Nov 26 06:20:14.564: INFO: Container csi-attacher ready: false, restart count 5 Nov 26 06:20:14.564: INFO: Container csi-provisioner ready: false, restart count 5 Nov 26 06:20:14.564: INFO: Container csi-resizer ready: false, restart count 5 Nov 26 06:20:14.564: INFO: Container csi-snapshotter ready: false, restart count 5 Nov 26 06:20:14.564: INFO: Container hostpath ready: false, restart count 5 Nov 26 06:20:14.564: INFO: Container liveness-probe ready: false, restart count 5 Nov 26 06:20:14.564: INFO: Container node-driver-registrar ready: false, restart count 5 Nov 26 06:20:14.564: INFO: hostexec-bootstrap-e2e-minion-group-4lvd-nrj69 started at 2022-11-26 06:20:06 +0000 UTC (0+1 
container statuses recorded) Nov 26 06:20:14.564: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 06:20:14.564: INFO: execpod-acceptqsr5s started at 2022-11-26 06:20:06 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:14.564: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 06:20:14.564: INFO: kube-proxy-bootstrap-e2e-minion-group-4lvd started at 2022-11-26 06:06:29 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:14.564: INFO: Container kube-proxy ready: false, restart count 6 Nov 26 06:20:14.564: INFO: coredns-6d97d5ddb-n6d4l started at 2022-11-26 06:06:46 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:14.564: INFO: Container coredns ready: false, restart count 6 Nov 26 06:20:14.564: INFO: netserver-0 started at 2022-11-26 06:18:37 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:14.564: INFO: Container webserver ready: true, restart count 1 Nov 26 06:20:14.564: INFO: hostexec-bootstrap-e2e-minion-group-4lvd-75xbh started at 2022-11-26 06:20:02 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:14.564: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 06:20:15.064: INFO: Latency metrics for node bootstrap-e2e-minion-group-4lvd Nov 26 06:20:15.064: INFO: Logging node info for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:20:15.122: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6hf3 b331b1c1-4929-41cb-863a-a271765dc91a 8608 0 2022-11-26 06:06:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6hf3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6hf3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8219":"bootstrap-e2e-minion-group-6hf3","csi-hostpath-provisioning-4288":"bootstrap-e2e-minion-group-6hf3","csi-mock-csi-mock-volumes-3236":"bootstrap-e2e-minion-group-6hf3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 06:06:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 06:16:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 06:17:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 06:19:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-6hf3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:16:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:16:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:16:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 
UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:16:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:49 +0000 UTC,LastTransitionTime:2022-11-26 06:06:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:19 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:19 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:19 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:17:19 +0000 UTC,LastTransitionTime:2022-11-26 06:06:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.104.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:26797942e06ae330f816fe3103a312e5,SystemUUID:26797942-e06a-e330-f816-fe3103a312e5,BootID:2fb94944-0ddb-4524-8e35-935d1c8900f4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8356e743-6d51-11ed-b618-52bed00202b8 kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8af161f1-6d51-11ed-b618-52bed00202b8],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8356e743-6d51-11ed-b618-52bed00202b8,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8af161f1-6d51-11ed-b618-52bed00202b8,DevicePath:,},},Config:nil,},} Nov 26 06:20:15.123: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:20:15.214: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6hf3 Nov 26 06:20:15.371: INFO: metadata-proxy-v0.1-hgwt5 started at 2022-11-26 06:06:36 +0000 UTC (0+2 container statuses recorded) Nov 26 06:20:15.371: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:20:15.371: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:20:15.371: INFO: csi-hostpathplugin-0 started at 2022-11-26 06:15:56 +0000 UTC (0+7 container statuses recorded) Nov 26 06:20:15.371: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 06:20:15.371: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 06:20:15.371: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 06:20:15.371: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 06:20:15.371: INFO: Container hostpath ready: true, restart count 2 Nov 26 06:20:15.371: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 06:20:15.371: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 26 06:20:15.371: INFO: csi-hostpathplugin-0 started at 2022-11-26 06:16:55 +0000 UTC (0+7 container statuses recorded) Nov 26 06:20:15.371: INFO: Container csi-attacher ready: true, restart count 0 Nov 26 06:20:15.371: INFO: Container 
csi-provisioner ready: true, restart count 0 Nov 26 06:20:15.371: INFO: Container csi-resizer ready: true, restart count 0 Nov 26 06:20:15.371: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 26 06:20:15.371: INFO: Container hostpath ready: true, restart count 0 Nov 26 06:20:15.371: INFO: Container liveness-probe ready: true, restart count 0 Nov 26 06:20:15.371: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 26 06:20:15.371: INFO: netserver-1 started at 2022-11-26 06:18:37 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:15.371: INFO: Container webserver ready: false, restart count 1 Nov 26 06:20:15.371: INFO: pvc-volume-tester-pg8h9 started at 2022-11-26 06:14:13 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:15.371: INFO: Container volume-tester ready: true, restart count 0 Nov 26 06:20:15.371: INFO: konnectivity-agent-czjjn started at 2022-11-26 06:06:49 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:15.371: INFO: Container konnectivity-agent ready: true, restart count 5 Nov 26 06:20:15.371: INFO: pvc-volume-tester-bl2qj started at 2022-11-26 06:14:00 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:15.371: INFO: Container volume-tester ready: false, restart count 0 Nov 26 06:20:15.371: INFO: hostexec-bootstrap-e2e-minion-group-6hf3-sbzgj started at 2022-11-26 06:20:09 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:15.371: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 06:20:15.371: INFO: csi-mockplugin-attacher-0 started at 2022-11-26 06:13:45 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:15.371: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 06:20:15.371: INFO: csi-mockplugin-0 started at 2022-11-26 06:13:45 +0000 UTC (0+3 container statuses recorded) Nov 26 06:20:15.371: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 06:20:15.371: INFO: Container driver-registrar ready: true, restart count 4 Nov 26 06:20:15.371: INFO: Container mock ready: true, restart count 4 Nov 26 06:20:15.371: INFO: kube-proxy-bootstrap-e2e-minion-group-6hf3 started at 2022-11-26 06:06:35 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:15.371: INFO: Container kube-proxy ready: false, restart count 6 Nov 26 06:20:15.371: INFO: metrics-server-v0.5.2-867b8754b9-hr966 started at 2022-11-26 06:07:02 +0000 UTC (0+2 container statuses recorded) Nov 26 06:20:15.371: INFO: Container metrics-server ready: false, restart count 5 Nov 26 06:20:15.371: INFO: Container metrics-server-nanny ready: true, restart count 6 Nov 26 06:20:15.371: INFO: hostexec-bootstrap-e2e-minion-group-6hf3-6t5wv started at 2022-11-26 06:20:08 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:15.371: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 06:20:15.371: INFO: nfs-server started at 2022-11-26 06:09:26 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:15.371: INFO: Container nfs-server ready: true, restart count 5 Nov 26 06:20:15.781: INFO: Latency metrics for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:20:15.781: INFO: Logging node info for node bootstrap-e2e-minion-group-8xrn Nov 26 06:20:15.854: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-8xrn bdb3ca25-6ac6-4bb7-a82f-bb358d7ba8f9 8090 0 2022-11-26 06:06:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true 
failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-8xrn kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-8xrn topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 06:10:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 06:16:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 06:17:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-8xrn,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 
0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:17:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.105.92.105,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-8xrn.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-8xrn.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1e49116bed901a0b492cc2a872edd7a8,SystemUUID:1e49116b-ed90-1a0b-492c-c2a872edd7a8,BootID:24bc719e-5b36-4d61-bc89-44455ca28dd7,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:20:15.854: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-8xrn Nov 26 06:20:15.943: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-8xrn Nov 26 06:20:16.313: INFO: metadata-proxy-v0.1-h465b started at 2022-11-26 06:06:30 +0000 UTC (0+2 container statuses recorded) Nov 26 06:20:16.313: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:20:16.313: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:20:16.313: INFO: test-container-pod started at 2022-11-26 06:18:50 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container webserver ready: true, restart count 1 Nov 26 06:20:16.313: INFO: coredns-6d97d5ddb-rr67j started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container coredns ready: true, restart count 6 Nov 26 06:20:16.313: INFO: hostexec-bootstrap-e2e-minion-group-8xrn-p6wdn started at 2022-11-26 06:20:06 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container agnhost-container ready: true, restart count 1 Nov 26 06:20:16.313: INFO: lb-sourcerange-pb7ck started at 2022-11-26 06:20:15 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container netexec ready: false, restart count 0 Nov 26 06:20:16.313: INFO: execpod-dropgqznm started at 2022-11-26 
06:20:11 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 06:20:16.313: INFO: pod-secrets-87dc7cd1-3169-4f80-851b-f29da7b564c7 started at 2022-11-26 06:17:44 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 06:20:16.313: INFO: external-provisioner-z8sqc started at 2022-11-26 06:18:07 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container nfs-provisioner ready: true, restart count 3 Nov 26 06:20:16.313: INFO: pod-781f7ff1-e6ba-4e3d-8aa5-df7823c51d2b started at 2022-11-26 06:20:02 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container write-pod ready: true, restart count 0 Nov 26 06:20:16.313: INFO: netserver-2 started at 2022-11-26 06:18:37 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container webserver ready: true, restart count 0 Nov 26 06:20:16.313: INFO: host-test-container-pod started at 2022-11-26 06:18:50 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container agnhost-container ready: true, restart count 2 Nov 26 06:20:16.313: INFO: pod-b0905ca1-1450-47db-8fec-a330ca8c9afd started at 2022-11-26 06:20:10 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container write-pod ready: true, restart count 0 Nov 26 06:20:16.313: INFO: kube-proxy-bootstrap-e2e-minion-group-8xrn started at 2022-11-26 06:06:29 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container kube-proxy ready: true, restart count 6 Nov 26 06:20:16.313: INFO: konnectivity-agent-7ppwz started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container konnectivity-agent ready: true, restart count 6 Nov 26 06:20:16.313: INFO: pod-subpath-test-dynamicpv-g5dx started at 2022-11-26 06:18:50 +0000 UTC (1+2 container statuses recorded) Nov 26 06:20:16.313: INFO: Init container init-volume-dynamicpv-g5dx ready: true, restart count 1 Nov 26 06:20:16.313: INFO: Container test-container-subpath-dynamicpv-g5dx ready: false, restart count 2 Nov 26 06:20:16.313: INFO: Container test-container-volume-dynamicpv-g5dx ready: false, restart count 2 Nov 26 06:20:16.313: INFO: external-provisioner-gkgcn started at 2022-11-26 06:20:01 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container nfs-provisioner ready: true, restart count 0 Nov 26 06:20:16.313: INFO: l7-default-backend-8549d69d99-7w7f7 started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 06:20:16.313: INFO: pod-7fcd7e2d-3908-4cd7-8b6b-b06a55b6fffb started at <nil> (0+0 container statuses recorded) Nov 26 06:20:16.313: INFO: volume-snapshot-controller-0 started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container volume-snapshot-controller ready: true, restart count 4 Nov 26 06:20:16.313: INFO: pvc-tester-phnnw started at 2022-11-26 06:15:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container write-pod ready: false, restart count 0 Nov 26 06:20:16.313: INFO: kube-dns-autoscaler-5f6455f985-z5fph started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container autoscaler ready: false, restart count 5 Nov 26 06:20:16.313: INFO: external-provisioner-bxp99 started 
at 2022-11-26 06:18:44 +0000 UTC (0+1 container statuses recorded) Nov 26 06:20:16.313: INFO: Container nfs-provisioner ready: true, restart count 1 Nov 26 06:20:17.415: INFO: Latency metrics for node bootstrap-e2e-minion-group-8xrn [DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193 STEP: Destroying namespace "statefulset-6296" for this suite. 11/26/22 06:20:17.415
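The per-node dumps above (the `Node Info: &Node{...}` blocks and the "Logging kubelet events" / "Logging pods the kubelet thinks is on node" lines) come from the e2e debug helpers fetching the Node object and its events through the API server. Below is a minimal client-go sketch of those two queries; the node name and kubeconfig path are copied from this log, and the event field selector is an approximation rather than the framework's exact code.

```go
// A sketch, not the framework's dump helper: fetch the Node object behind the
// "Node Info:" line and its kubelet events via the API server.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The "Node Info: &Node{...}" block is this object, printed with %+v.
	node, err := client.CoreV1().Nodes().Get(context.TODO(),
		"bootstrap-e2e-minion-group-8xrn", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Node Info: %+v\n", node)

	// Approximation of "Logging kubelet events for node ...": list events whose
	// involved object is the node. (The framework's exact selector may differ.)
	events, err := client.CoreV1().Events(metav1.NamespaceSystem).List(context.TODO(),
		metav1.ListOptions{
			FieldSelector: "involvedObject.kind=Node,involvedObject.name=bootstrap-e2e-minion-group-8xrn,source=kubelet",
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kubelet events\n", len(events.Items))
}
```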
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-auth\]\sServiceAccounts\sshould\ssupport\sInClusterConfig\swith\stoken\srotation\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00088c3c0) test/e2e/framework/framework.go:241 +0x96f from junit_01.xml
[BeforeEach] [sig-auth] ServiceAccounts set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:22:51.635 Nov 26 06:22:51.635: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename svcaccounts 11/26/22 06:22:51.636 Nov 26 06:22:51.676: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:22:53.715: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:22:55.716: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:22:57.716: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:22:59.716: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:01.716: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:03.715: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:05.716: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:07.716: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:09.715: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:11.716: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:13.716: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:15.716: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:17.715: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:19.716: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:21.715: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:21.754: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:21.754: INFO: Unexpected error: <*errors.errorString | 0xc00017da10>: { s: "timed out waiting for the condition", } Nov 26 06:23:21.755: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00088c3c0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] 
[sig-auth] ServiceAccounts test/e2e/framework/node/init/init.go:32 Nov 26 06:23:21.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-auth] ServiceAccounts dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 06:23:21.794 [DeferCleanup (Each)] [sig-auth] ServiceAccounts tear down framework | framework.go:193
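The repeated "Unexpected error while creating namespace" lines show the framework's BeforeEach polling namespace creation against an unreachable API server (every Post to 34.83.17.181:443 is refused) until the poll deadline expires; the closing "timed out waiting for the condition" is the standard apimachinery wait-timeout error. A minimal sketch of that retry shape follows, with assumed intervals rather than the framework's exact code at framework.go:241:

```go
// A sketch of the retry shape (assumed intervals; not the framework's exact
// code): poll namespace creation until success or timeout.
package e2eutil

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// CreateTestNamespace retries creation so transient "connection refused"
// errors are survivable; on deadline it returns the apimachinery error whose
// message is exactly "timed out waiting for the condition".
func CreateTestNamespace(client kubernetes.Interface, baseName string) (*corev1.Namespace, error) {
	var created *corev1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns := &corev1.Namespace{
			ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"},
		}
		got, err := client.CoreV1().Namespaces().Create(context.TODO(), ns, metav1.CreateOptions{})
		if err != nil {
			// Log and retry, as in the "Unexpected error while creating
			// namespace" lines above.
			fmt.Printf("Unexpected error while creating namespace: %v\n", err)
			return false, nil
		}
		created = got
		return true, nil
	})
	return created, err
}
```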
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swith\s\-\-leave\-stdin\-open$'
test/e2e/kubectl/kubectl.go:589 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.7.7() test/e2e/kubectl/kubectl.go:589 +0x22d from junit_01.xml
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:25:26.688 Nov 26 06:25:26.689: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/26/22 06:25:26.69 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 06:25:26.878 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 06:25:26.982 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/26/22 06:25:27.11 Nov 26 06:25:27.110: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7225 create -f -' Nov 26 06:25:27.674: INFO: stderr: "" Nov 26 06:25:27.674: INFO: stdout: "pod/httpd created\n" Nov 26 06:25:27.674: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 26 06:25:27.674: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-7225" to be "running and ready" Nov 26 06:25:27.728: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 54.277358ms Nov 26 06:25:27.728: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' to be 'Running' but was 'Pending' Nov 26 06:25:29.902: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228188743s Nov 26 06:25:29.902: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' to be 'Running' but was 'Pending' Nov 26 06:25:31.785: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.111025492s Nov 26 06:25:31.785: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC }] Nov 26 06:25:33.790: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.116179229s Nov 26 06:25:33.790: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC }] Nov 26 06:25:35.826: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.15225557s Nov 26 06:25:35.826: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC }] Nov 26 06:25:37.792: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.118137942s Nov 26 06:25:37.792: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC }] Nov 26 06:25:39.789: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.115699582s Nov 26 06:25:39.789: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC }] Nov 26 06:25:41.784: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 14.110250685s Nov 26 06:25:41.784: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC }] Nov 26 06:25:43.790: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.116301546s Nov 26 06:25:43.790: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:27 +0000 UTC }] Nov 26 06:25:45.911: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 18.237472666s Nov 26 06:25:45.911: INFO: Pod "httpd" satisfied condition "running and ready" Nov 26 06:25:45.911: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] [Slow] running a failing command with --leave-stdin-open test/e2e/kubectl/kubectl.go:585 Nov 26 06:25:45.911: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7225 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=Never --pod-running-timeout=2m0s failure-4 --leave-stdin-open -- /bin/sh -c exit 42' Nov 26 06:25:51.365: INFO: rc: 1 Nov 26 06:25:51.365: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc000608e10>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7225 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=Never --pod-running-timeout=2m0s failure-4 --leave-stdin-open -- /bin/sh -c exit 42:\nCommand stdout:\n\nstderr:\nError from server: Get \"https://10.138.0.3:10250/containerLogs/kubectl-7225/failure-4/failure-4\": No agent available\n\nerror:\nexit status 1", }, Code: 1, } Nov 26 06:25:51.365: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7225 run -i --image=registry.k8s.io/e2e-test-images/busybox:1.29-4 --restart=Never --pod-running-timeout=2m0s failure-4 --leave-stdin-open -- /bin/sh -c exit 42: Command stdout: stderr: Error from server: Get "https://10.138.0.3:10250/containerLogs/kubectl-7225/failure-4/failure-4": No agent available error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.7.7() test/e2e/kubectl/kubectl.go:589 +0x22d [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/26/22 06:25:51.366 Nov 26 06:25:51.366: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7225 delete --grace-period=0 --force -f -' Nov 26 06:25:51.705: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Nov 26 06:25:51.705: INFO: stdout: "pod \"httpd\" force deleted\n" Nov 26 06:25:51.705: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7225 get rc,svc -l name=httpd --no-headers' Nov 26 06:25:52.068: INFO: stderr: "No resources found in kubectl-7225 namespace.\n" Nov 26 06:25:52.068: INFO: stdout: "" Nov 26 06:25:52.068: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=kubectl-7225 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 26 06:25:52.358: INFO: stderr: "" Nov 26 06:25:52.358: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 26 06:25:52.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 06:25:52.424 STEP: Collecting events from namespace "kubectl-7225". 11/26/22 06:25:52.424 STEP: Found 9 events. 11/26/22 06:25:52.48 Nov 26 06:25:52.480: INFO: At 2022-11-26 06:25:27 +0000 UTC - event for httpd: {default-scheduler } Scheduled: Successfully assigned kubectl-7225/httpd to bootstrap-e2e-minion-group-6hf3 Nov 26 06:25:52.480: INFO: At 2022-11-26 06:25:28 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-6hf3} Pulled: Container image "registry.k8s.io/e2e-test-images/httpd:2.4.38-4" already present on machine Nov 26 06:25:52.480: INFO: At 2022-11-26 06:25:28 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-6hf3} Created: Created container httpd Nov 26 06:25:52.480: INFO: At 2022-11-26 06:25:28 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-6hf3} Started: Started container httpd Nov 26 06:25:52.480: INFO: At 2022-11-26 06:25:28 +0000 UTC - event for httpd: {kubelet bootstrap-e2e-minion-group-6hf3} Killing: Stopping container httpd Nov 26 06:25:52.480: INFO: At 2022-11-26 06:25:46 +0000 UTC - event for failure-4: {default-scheduler } Scheduled: Successfully assigned kubectl-7225/failure-4 to bootstrap-e2e-minion-group-6hf3 Nov 26 06:25:52.480: INFO: At 2022-11-26 06:25:47 +0000 UTC - event for failure-4: {kubelet bootstrap-e2e-minion-group-6hf3} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Nov 26 06:25:52.480: INFO: At 2022-11-26 06:25:47 +0000 UTC - event for failure-4: {kubelet bootstrap-e2e-minion-group-6hf3} Created: Created container failure-4 Nov 26 06:25:52.480: INFO: At 2022-11-26 06:25:47 +0000 UTC - event for failure-4: {kubelet bootstrap-e2e-minion-group-6hf3} Started: Started container failure-4 Nov 26 06:25:52.545: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 06:25:52.545: INFO: failure-4 bootstrap-e2e-minion-group-6hf3 Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:46 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:46 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:46 +0000 UTC }] 
Nov 26 06:25:52.545: INFO: Nov 26 06:25:52.637: INFO: Unable to fetch kubectl-7225/failure-4/failure-4 logs: an error on the server ("unknown") has prevented the request from succeeding (get pods failure-4) Nov 26 06:25:52.716: INFO: Logging node info for node bootstrap-e2e-master Nov 26 06:25:52.794: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 193495c1-5b1f-409e-8da9-b1de094ba8ed 9690 0 2022-11-26 06:06:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 06:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 06:22:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:49 +0000 UTC,LastTransitionTime:2022-11-26 06:06:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:22:11 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:22:11 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:22:11 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:22:11 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.17.181,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d1b5362410d55fa2b320229b7ed22cb3,SystemUUID:d1b53624-10d5-5fa2-b320-229b7ed22cb3,BootID:13424371-b4dc-4f78-9364-1afc151da040,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:25:52.795: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 06:25:52.898: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 06:25:53.016: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 26 06:25:53.016: INFO: Logging node info for node bootstrap-e2e-minion-group-4lvd Nov 26 06:25:53.074: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-4lvd e0d66193-5762-49e7-b872-8ea4dee27a72 11679 0 2022-11-26 06:06:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-4lvd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-4lvd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-26 06:21:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 06:25:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 06:25:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-4lvd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:16 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:16 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:16 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:25:16 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.235.77,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:60066c0df6854403f12a43e8820a94a9,SystemUUID:60066c0d-f685-4403-f12a-43e8820a94a9,BootID:c95b6e64-9275-42a1-b638-b99f8479d167,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-6328^139fe791-6d53-11ed-8b0a-ae3fa0c45b22],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-6328^139fe791-6d53-11ed-8b0a-ae3fa0c45b22,DevicePath:,},},Config:nil,},} Nov 26 06:25:53.074: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-4lvd Nov 26 06:25:53.144: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-4lvd Nov 26 06:25:53.259: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-4lvd: error trying to reach service: No agent available Nov 26 06:25:53.259: INFO: Logging node info for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:25:53.315: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6hf3 b331b1c1-4929-41cb-863a-a271765dc91a 11900 0 2022-11-26 06:06:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6hf3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6hf3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-5397":"bootstrap-e2e-minion-group-6hf3","csi-mock-csi-mock-volumes-3236":"bootstrap-e2e-minion-group-6hf3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 06:06:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 06:20:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 06:21:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 06:25:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-6hf3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:49 +0000 UTC,LastTransitionTime:2022-11-26 06:06:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:50 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:50 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:50 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:25:50 +0000 UTC,LastTransitionTime:2022-11-26 06:06:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.104.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:26797942e06ae330f816fe3103a312e5,SystemUUID:26797942-e06a-e330-f816-fe3103a312e5,BootID:2fb94944-0ddb-4524-8e35-935d1c8900f4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8356e743-6d51-11ed-b618-52bed00202b8 kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8af161f1-6d51-11ed-b618-52bed00202b8],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8356e743-6d51-11ed-b618-52bed00202b8,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8af161f1-6d51-11ed-b618-52bed00202b8,DevicePath:,},},Config:nil,},} Nov 26 06:25:53.316: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:25:53.376: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6hf3 Nov 26 06:25:53.460: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6hf3: error trying to reach service: No agent available Nov 26 06:25:53.460: INFO: Logging node info for node bootstrap-e2e-minion-group-8xrn Nov 26 06:25:53.595: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-8xrn bdb3ca25-6ac6-4bb7-a82f-bb358d7ba8f9 11659 0 2022-11-26 06:06:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-8xrn kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-8xrn topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-5148":"bootstrap-e2e-minion-group-8xrn","csi-mock-csi-mock-volumes-4529":"csi-mock-csi-mock-volumes-4529"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] 
[{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-26 06:21:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 06:22:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 06:25:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-8xrn,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:37 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:37 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:37 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:25:37 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.105.92.105,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-8xrn.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-8xrn.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1e49116bed901a0b492cc2a872edd7a8,SystemUUID:1e49116b-ed90-1a0b-492c-c2a872edd7a8,BootID:24bc719e-5b36-4d61-bc89-44455ca28dd7,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:25:53.595: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-8xrn Nov 26 06:25:53.684: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-8xrn Nov 26 06:25:53.890: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-8xrn: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-7225" for this suite. 11/26/22 06:25:53.89
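One note on reading the Node dumps above: each ResourceList entry is a resource.Quantity printed as raw struct fields, so a line like "ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI}" pairs the raw byte count with its canonical suffixed form (98831908Ki x 1024 = 101203873792 bytes). A minimal sketch of that round trip using the apimachinery resource package, purely illustrative:

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/api/resource"
)

func main() {
    // The suffixed form shown in the node dump...
    q := resource.MustParse("98831908Ki")
    // ...decodes to the raw byte value printed beside it.
    fmt.Println(q.Value()) // 98831908 * 1024 = 101203873792
}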
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cli\]\sKubectl\sclient\sSimple\spod\sshould\sreturn\scommand\sexit\scodes\s\[Slow\]\srunning\sa\sfailing\scommand\swithout\s\-\-restart\=Never\,\sbut\swith\s\-\-rm$'
test/e2e/kubectl/kubectl.go:415
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1()
	test/e2e/kubectl/kubectl.go:415 +0x245
There were additional failures detected after the initial failure:
[FAILED] Nov 26 06:22:49.837: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2358 delete --grace-period=0 --force -f -:
Command stdout:
stderr:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
error: error when deleting "STDIN": Delete "https://34.83.17.181/api/v1/namespaces/kubectl-2358/pods/httpd": dial tcp 34.83.17.181:443: connect: connection refused
error: exit status 1
In [AfterEach] at: test/e2e/framework/kubectl/builder.go:87
----------
[FAILED] Nov 26 06:22:49.917: failed to list events in namespace "kubectl-2358": Get "https://34.83.17.181/api/v1/namespaces/kubectl-2358/events": dial tcp 34.83.17.181:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Nov 26 06:22:49.956: Couldn't delete ns: "kubectl-2358": Delete "https://34.83.17.181/api/v1/namespaces/kubectl-2358": dial tcp 34.83.17.181:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.17.181/api/v1/namespaces/kubectl-2358", Err:(*net.OpError)(0xc0046aa370)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

from junit_01.xml
[BeforeEach] [sig-cli] Kubectl client set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:21:14.589 Nov 26 06:21:14.589: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename kubectl 11/26/22 06:21:14.59 Nov 26 06:21:14.630: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:21:16.674: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 06:22:16.577 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 06:22:16.662 [BeforeEach] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-cli] Kubectl client test/e2e/kubectl/kubectl.go:274 [BeforeEach] Simple pod test/e2e/kubectl/kubectl.go:411 STEP: creating the pod from 11/26/22 06:22:19.052 Nov 26 06:22:19.052: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2358 create -f -' Nov 26 06:22:19.640: INFO: stderr: "" Nov 26 06:22:19.640: INFO: stdout: "pod/httpd created\n" Nov 26 06:22:19.640: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] Nov 26 06:22:19.640: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-2358" to be "running and ready" Nov 26 06:22:19.685: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 44.740711ms Nov 26 06:22:19.685: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' to be 'Running' but was 'Pending' Nov 26 06:22:21.757: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117147268s Nov 26 06:22:21.757: INFO: Error evaluating pod condition running and ready: want pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' to be 'Running' but was 'Pending' Nov 26 06:22:23.738: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.098104933s Nov 26 06:22:23.738: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC }] Nov 26 06:22:25.772: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.131621511s Nov 26 06:22:25.772: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC }] Nov 26 06:22:27.761: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.120498521s Nov 26 06:22:27.761: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC }] Nov 26 06:22:29.817: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.176714371s Nov 26 06:22:29.817: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC }] Nov 26 06:22:31.768: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.127773933s Nov 26 06:22:31.768: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC }] Nov 26 06:22:33.759: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.118898104s Nov 26 06:22:33.759: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC }] Nov 26 06:22:35.747: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 16.106878352s Nov 26 06:22:35.747: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC }] Nov 26 06:22:37.739: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 18.098775464s Nov 26 06:22:37.739: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC }] Nov 26 06:22:39.800: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 20.159847016s Nov 26 06:22:39.800: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC }] Nov 26 06:22:41.749: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.109327217s Nov 26 06:22:41.749: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC }] Nov 26 06:22:43.763: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 24.122550575s Nov 26 06:22:43.763: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC }] Nov 26 06:22:45.815: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 26.175348759s Nov 26 06:22:45.815: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC }] Nov 26 06:22:47.803: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 28.163373083s Nov 26 06:22:47.803: INFO: Error evaluating pod condition running and ready: pod 'httpd' on 'bootstrap-e2e-minion-group-6hf3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:19 +0000 UTC }] Nov 26 06:22:49.725: INFO: Encountered non-retryable error while getting pod kubectl-2358/httpd: Get "https://34.83.17.181/api/v1/namespaces/kubectl-2358/pods/httpd": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:22:49.725: INFO: Pod httpd failed to be running and ready. Nov 26 06:22:49.725: INFO: Wanted all 1 pods to be running and ready. Result: false. 
Pods: [httpd] Nov 26 06:22:49.725: FAIL: Expected <bool>: false to equal <bool>: true Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.1() test/e2e/kubectl/kubectl.go:415 +0x245 [AfterEach] Simple pod test/e2e/kubectl/kubectl.go:417 STEP: using delete to clean up resources 11/26/22 06:22:49.726 Nov 26 06:22:49.726: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2358 delete --grace-period=0 --force -f -' Nov 26 06:22:49.837: INFO: rc: 1 Nov 26 06:22:49.837: INFO: Unexpected error: <exec.CodeExitError>: { Err: <*errors.errorString | 0xc0016e5cf0>{ s: "error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2358 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when deleting \"STDIN\": Delete \"https://34.83.17.181/api/v1/namespaces/kubectl-2358/pods/httpd\": dial tcp 34.83.17.181:443: connect: connection refused\n\nerror:\nexit status 1", }, Code: 1, } Nov 26 06:22:49.837: FAIL: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=kubectl-2358 delete --grace-period=0 --force -f -: Command stdout: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. error: error when deleting "STDIN": Delete "https://34.83.17.181/api/v1/namespaces/kubectl-2358/pods/httpd": dial tcp 34.83.17.181:443: connect: connection refused error: exit status 1 Full Stack Trace k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecOrDie({0xc0009ef1e0?, 0x0?}, {0xc00355cdb0, 0xc}) test/e2e/framework/kubectl/builder.go:87 +0x1b4 k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectlOrDieInput({0xc00355cdb0, 0xc}, {0xc003492160, 0x145}, {0xc0039adec0?, 0x8?, 0x7fa75be685b8?}) test/e2e/framework/kubectl/builder.go:165 +0xd6 k8s.io/kubernetes/test/e2e/kubectl.cleanupKubectlInputs({0xc003492160, 0x145}, {0xc00355cdb0, 0xc}, {0xc0017d5dc0, 0x1, 0x1}) test/e2e/kubectl/kubectl.go:201 +0x132 k8s.io/kubernetes/test/e2e/kubectl.glob..func1.8.2() test/e2e/kubectl/kubectl.go:418 +0x76 [AfterEach] [sig-cli] Kubectl client test/e2e/framework/node/init/init.go:32 Nov 26 06:22:49.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-cli] Kubectl client test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-cli] Kubectl client dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 06:22:49.877 STEP: Collecting events from namespace "kubectl-2358". 
11/26/22 06:22:49.877 Nov 26 06:22:49.916: INFO: Unexpected error: failed to list events in namespace "kubectl-2358": <*url.Error | 0xc003510900>: { Op: "Get", URL: "https://34.83.17.181/api/v1/namespaces/kubectl-2358/events", Err: <*net.OpError | 0xc0048fdea0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0048923f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 17, 181], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00369eb00>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 06:22:49.917: FAIL: failed to list events in namespace "kubectl-2358": Get "https://34.83.17.181/api/v1/namespaces/kubectl-2358/events": dial tcp 34.83.17.181:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0018545c0, {0xc00355cdb0, 0xc}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc0031291e0}, {0xc00355cdb0, 0xc}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001854650?, {0xc00355cdb0?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00109e2d0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc00173cb60?, 0xc002387fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc003055c28?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00173cb60?, 0x29449fc?}, {0xae73300?, 0xc002387f80?, 0x2fdb5c0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-cli] Kubectl client tear down framework | framework.go:193 STEP: Destroying namespace "kubectl-2358" for this suite. 11/26/22 06:22:49.917 Nov 26 06:22:49.956: FAIL: Couldn't delete ns: "kubectl-2358": Delete "https://34.83.17.181/api/v1/namespaces/kubectl-2358": dial tcp 34.83.17.181:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.17.181/api/v1/namespaces/kubectl-2358", Err:(*net.OpError)(0xc0046aa370)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00109e2d0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00173cac0?, 0xc0045ddfb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00173cac0?, 0x0?}, {0xae73300?, 0x5?, 0xc0026738e0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
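The repeated "Error evaluating pod condition running and ready" entries above come from a simple get-and-check poll against the apiserver, which is why a single "connection refused" ends the wait immediately as a non-retryable error. A minimal client-go sketch of that pattern, illustrative only and not the framework's actual helper (namespace, pod name, cadence, and timeout taken from the log; assumes a kubeconfig at the default location):

package main

import (
    "context"
    "fmt"
    "time"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    // Poll every 2s for up to 5m, the cadence and timeout visible in the log.
    err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
        pod, err := client.CoreV1().Pods("kubectl-2358").Get(context.TODO(), "httpd", metav1.GetOptions{})
        if err != nil {
            // Returning the error aborts the poll, mirroring the
            // "Encountered non-retryable error" line above.
            return false, err
        }
        if pod.Status.Phase != v1.PodRunning {
            return false, nil
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == v1.PodReady {
                return cond.Status == v1.ConditionTrue, nil
            }
        }
        return false, nil
    })
    fmt.Println("running and ready:", err == nil)
}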
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\shandle\supdates\sto\sExternalTrafficPolicy\sfield$'
test/e2e/framework/network/utils.go:834
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000f387e0, 0x3c?)
	test/e2e/framework/network/utils.go:834 +0x545
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000f6cb40, {0x0, 0x0, 0x7f8f6d0?})
	test/e2e/framework/network/utils.go:131 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func20.7()
	test/e2e/network/loadbalancer.go:1544 +0x417

from junit_01.xml
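The "failed to validate endpoints" timeout in the log below is the framework polling the test service's Endpoints object until it reports three ready addresses (one per netserver pod) and giving up when the wait window expires. A rough client-go sketch of that kind of check; the package, function name, and timeout are illustrative, not the framework's:

package endpointwait // hypothetical helper package, not part of the framework

import (
    "context"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitForEndpointCount polls until the named Service's Endpoints object lists
// `want` ready addresses, roughly what the repeated "Waiting for amount of
// service:node-port-service endpoints to be 3" lines below are doing.
func waitForEndpointCount(ctx context.Context, c kubernetes.Interface, ns, svc string, want int) error {
    return wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
        ep, err := c.CoreV1().Endpoints(ns).Get(ctx, svc, metav1.GetOptions{})
        if err != nil {
            return false, nil // not created yet or transient error: keep polling
        }
        got := 0
        for _, s := range ep.Subsets {
            got += len(s.Addresses) // ready addresses only; NotReadyAddresses are excluded
        }
        return got == want, nil
    })
}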
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:13:35.257 Nov 26 06:13:35.257: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 06:13:35.258 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 06:13:43.049 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 06:13:43.142 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should handle updates to ExternalTrafficPolicy field test/e2e/network/loadbalancer.go:1480 STEP: creating a service esipp-9299/external-local-update with type=LoadBalancer 11/26/22 06:13:43.406 STEP: setting ExternalTrafficPolicy=Local 11/26/22 06:13:43.406 STEP: waiting for loadbalancer for service esipp-9299/external-local-update 11/26/22 06:13:43.463 Nov 26 06:13:43.464: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer STEP: creating a pod to be part of the service external-local-update 11/26/22 06:15:49.641 Nov 26 06:15:49.831: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 06:15:49.951: INFO: Found all 1 pods Nov 26 06:15:49.951: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-update-w2stv] Nov 26 06:15:49.951: INFO: Waiting up to 2m0s for pod "external-local-update-w2stv" in namespace "esipp-9299" to be "running and ready" Nov 26 06:15:50.115: INFO: Pod "external-local-update-w2stv": Phase="Pending", Reason="", readiness=false. Elapsed: 163.8676ms Nov 26 06:15:50.115: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-w2stv' on 'bootstrap-e2e-minion-group-8xrn' to be 'Running' but was 'Pending' Nov 26 06:15:52.165: INFO: Pod "external-local-update-w2stv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214123049s Nov 26 06:15:52.165: INFO: Error evaluating pod condition running and ready: want pod 'external-local-update-w2stv' on 'bootstrap-e2e-minion-group-8xrn' to be 'Running' but was 'Pending' Nov 26 06:15:54.170: INFO: Pod "external-local-update-w2stv": Phase="Running", Reason="", readiness=true. Elapsed: 4.218598323s Nov 26 06:15:54.170: INFO: Pod "external-local-update-w2stv" satisfied condition "running and ready" Nov 26 06:15:54.170: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [external-local-update-w2stv] STEP: waiting for loadbalancer for service esipp-9299/external-local-update 11/26/22 06:15:54.17 Nov 26 06:15:54.170: INFO: Waiting up to 15m0s for service "external-local-update" to have a LoadBalancer STEP: turning ESIPP off 11/26/22 06:15:54.225 STEP: Performing setup for networking test in namespace esipp-9299 11/26/22 06:15:55.771 STEP: creating a selector 11/26/22 06:15:55.771 STEP: Creating the service pods in kubernetes 11/26/22 06:15:55.771 Nov 26 06:15:55.772: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 26 06:15:56.157: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-9299" to be "running and ready" Nov 26 06:15:56.251: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 94.125974ms Nov 26 06:15:56.251: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 06:15:58.315: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.15808822s Nov 26 06:15:58.315: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:00.318: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.161306864s Nov 26 06:16:00.318: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:02.451: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.294823027s Nov 26 06:16:02.451: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:04.307: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.150073696s Nov 26 06:16:04.307: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:06.315: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.15802606s Nov 26 06:16:06.315: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:08.305: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.148471685s Nov 26 06:16:08.305: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:10.303: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.145994454s Nov 26 06:16:10.303: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:12.301: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.144760341s Nov 26 06:16:12.301: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:14.308: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.151318633s Nov 26 06:16:14.308: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:16.303: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.1468845s Nov 26 06:16:16.303: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:18.301: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.144035774s Nov 26 06:16:18.301: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:20.379: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.222320438s Nov 26 06:16:20.379: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:22.307: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.149988032s Nov 26 06:16:22.307: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:24.306: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.149445281s Nov 26 06:16:24.306: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:26.314: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.157524196s Nov 26 06:16:26.314: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:28.343: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.186635359s Nov 26 06:16:28.343: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:30.354: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.197290609s Nov 26 06:16:30.354: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:32.307: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.150580298s Nov 26 06:16:32.307: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:34.309: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 38.152033632s Nov 26 06:16:34.309: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:36.336: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.179879102s Nov 26 06:16:36.336: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:38.307: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.150867544s Nov 26 06:16:38.307: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:40.328: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.171254282s Nov 26 06:16:40.328: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:42.309: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.152922366s Nov 26 06:16:42.309: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:44.332: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.174986155s Nov 26 06:16:44.332: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:46.309: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.152605001s Nov 26 06:16:46.309: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:48.363: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.206061173s Nov 26 06:16:48.363: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:50.348: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.191089572s Nov 26 06:16:50.348: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:52.340: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.183658019s Nov 26 06:16:52.340: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:54.309: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.152954712s Nov 26 06:16:54.310: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:56.431: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.274876314s Nov 26 06:16:56.431: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:16:58.312: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.155667496s Nov 26 06:16:58.312: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:17:00.347: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.190514059s Nov 26 06:17:00.347: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:17:02.304: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.147542572s Nov 26 06:17:02.304: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:17:04.451: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.294784052s Nov 26 06:17:04.451: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:17:06.310: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.153663714s Nov 26 06:17:06.310: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:17:08.335: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.178094803s Nov 26 06:17:08.335: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:17:10.310: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m14.153626695s Nov 26 06:17:10.310: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:17:12.319: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.162582436s Nov 26 06:17:12.319: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:17:14.328: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.171265767s Nov 26 06:17:14.328: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:17:16.339: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.18225488s Nov 26 06:17:16.339: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:17:18.320: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 1m22.163854214s Nov 26 06:17:18.320: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 26 06:17:18.320: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 26 06:17:18.385: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-9299" to be "running and ready" Nov 26 06:17:18.466: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 81.051787ms Nov 26 06:17:18.466: INFO: The phase of Pod netserver-1 is Running (Ready = true) Nov 26 06:17:18.466: INFO: Pod "netserver-1" satisfied condition "running and ready" Nov 26 06:17:18.512: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "esipp-9299" to be "running and ready" Nov 26 06:17:18.564: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 51.842579ms Nov 26 06:17:18.564: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:20.623: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2.11077873s Nov 26 06:17:20.623: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:22.608: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 4.096165816s Nov 26 06:17:22.608: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:24.659: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 6.146503961s Nov 26 06:17:24.659: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:26.615: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 8.102651814s Nov 26 06:17:26.615: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:28.617: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 10.104482213s Nov 26 06:17:28.617: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:30.649: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 12.136659366s Nov 26 06:17:30.649: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:32.641: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 14.128436944s Nov 26 06:17:32.641: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:34.643: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 16.1307831s Nov 26 06:17:34.643: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:36.621: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.108468675s Nov 26 06:17:36.621: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:38.612: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 20.099634724s Nov 26 06:17:38.612: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:40.756: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 22.244300221s Nov 26 06:17:40.756: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:42.616: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 24.103565183s Nov 26 06:17:42.616: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:44.712: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 26.19994115s Nov 26 06:17:44.712: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:46.622: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 28.110217354s Nov 26 06:17:46.622: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:48.612: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 30.100346911s Nov 26 06:17:48.612: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:50.626: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 32.113962014s Nov 26 06:17:50.626: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:52.616: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 34.103721902s Nov 26 06:17:52.616: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:54.660: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 36.147811437s Nov 26 06:17:54.660: INFO: The phase of Pod netserver-2 is Running (Ready = false) Nov 26 06:17:56.631: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 38.119191859s Nov 26 06:17:56.631: INFO: The phase of Pod netserver-2 is Running (Ready = true) Nov 26 06:17:56.631: INFO: Pod "netserver-2" satisfied condition "running and ready" STEP: Creating test pods 11/26/22 06:17:56.694 Nov 26 06:17:56.792: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "esipp-9299" to be "running" Nov 26 06:17:56.852: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 59.394164ms Nov 26 06:17:58.900: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.107916274s Nov 26 06:17:58.900: INFO: Pod "test-container-pod" satisfied condition "running" Nov 26 06:17:58.951: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 STEP: Getting node addresses 11/26/22 06:17:58.951 Nov 26 06:17:58.951: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes 11/26/22 06:17:59.079 Nov 26 06:17:59.384: INFO: Service node-port-service in namespace esipp-9299 found. Nov 26 06:17:59.894: INFO: Service session-affinity-service in namespace esipp-9299 found. 
STEP: Waiting for NodePort service to expose endpoint 11/26/22 06:17:59.986 Nov 26 06:18:00.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:01.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:02.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:03.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:04.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:05.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:06.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:07.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:08.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:09.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:10.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:11.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:12.986: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:13.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:14.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:15.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:16.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:17.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:18.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:19.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:20.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:21.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:22.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:23.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:24.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:25.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:26.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:27.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:28.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:29.987: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:30.055: INFO: Waiting for amount of service:node-port-service endpoints to be 3 Nov 26 06:18:30.107: INFO: Unexpected error: failed to validate endpoints for service node-port-service in namespace: esipp-9299: <*errors.errorString | 0xc00011dd80>: { s: "timed out waiting for the condition", } Nov 26 06:18:30.107: FAIL: failed to validate endpoints for service node-port-service in namespace: esipp-9299: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000f387e0, 0x3c?) 
test/e2e/framework/network/utils.go:834 +0x545 k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000f6cb40, {0x0, 0x0, 0x7f8f6d0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.7() test/e2e/network/loadbalancer.go:1544 +0x417 Nov 26 06:18:30.295: INFO: Waiting up to 15m0s for service "external-local-update" to have no LoadBalancer [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 06:18:40.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 26 06:18:40.661: INFO: Output of kubectl describe svc: Nov 26 06:18:40.661: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=esipp-9299 describe svc --namespace=esipp-9299' Nov 26 06:18:41.283: INFO: stderr: "" Nov 26 06:18:41.283: INFO: stdout: "Name: external-local-update\nNamespace: esipp-9299\nLabels: testid=external-local-update-a69d167a-1b6d-487b-90ba-9b8d84cdd94d\nAnnotations: <none>\nSelector: testid=external-local-update-a69d167a-1b6d-487b-90ba-9b8d84cdd94d\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.56.5\nIPs: 10.0.56.5\nPort: <unset> 80/TCP\nTargetPort: 80/TCP\nEndpoints: 10.64.1.94:80\nSession Affinity: None\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal ExternalTrafficPolicy 2m46s service-controller Local -> Cluster\n Normal EnsuringLoadBalancer 2m18s (x2 over 3m27s) service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 87s (x2 over 2m51s) service-controller Ensured load balancer\n Normal Type 10s service-controller LoadBalancer -> ClusterIP\n Normal DeletingLoadBalancer 10s service-controller Deleting load balancer\n\n\nName: node-port-service\nNamespace: esipp-9299\nLabels: <none>\nAnnotations: <none>\nSelector: selector-e6773b2c-c90b-4183-acd5-17382ec9e4fb=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.145.78\nIPs: 10.0.145.78\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 30749/TCP\nEndpoints: 10.64.0.77:8083\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 32582/UDP\nEndpoints: 10.64.0.77:8081\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents: <none>\n\n\nName: session-affinity-service\nNamespace: esipp-9299\nLabels: <none>\nAnnotations: <none>\nSelector: selector-e6773b2c-c90b-4183-acd5-17382ec9e4fb=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.247.0\nIPs: 10.0.247.0\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 32049/TCP\nEndpoints: 10.64.0.77:8083\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 31970/UDP\nEndpoints: 10.64.0.77:8081\nSession Affinity: ClientIP\nExternal Traffic Policy: Cluster\nEvents: <none>\n" Nov 26 06:18:41.283: INFO: Name: external-local-update Namespace: esipp-9299 Labels: testid=external-local-update-a69d167a-1b6d-487b-90ba-9b8d84cdd94d Annotations: <none> Selector: testid=external-local-update-a69d167a-1b6d-487b-90ba-9b8d84cdd94d Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.56.5 IPs: 10.0.56.5 Port: <unset> 80/TCP TargetPort: 80/TCP Endpoints: 10.64.1.94:80 Session Affinity: None Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ExternalTrafficPolicy 2m46s service-controller Local -> Cluster Normal 
EnsuringLoadBalancer 2m18s (x2 over 3m27s) service-controller Ensuring load balancer Normal EnsuredLoadBalancer 87s (x2 over 2m51s) service-controller Ensured load balancer Normal Type 10s service-controller LoadBalancer -> ClusterIP Normal DeletingLoadBalancer 10s service-controller Deleting load balancer Name: node-port-service Namespace: esipp-9299 Labels: <none> Annotations: <none> Selector: selector-e6773b2c-c90b-4183-acd5-17382ec9e4fb=true Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.145.78 IPs: 10.0.145.78 Port: http 80/TCP TargetPort: 8083/TCP NodePort: http 30749/TCP Endpoints: 10.64.0.77:8083 Port: udp 90/UDP TargetPort: 8081/UDP NodePort: udp 32582/UDP Endpoints: 10.64.0.77:8081 Session Affinity: None External Traffic Policy: Cluster Events: <none> Name: session-affinity-service Namespace: esipp-9299 Labels: <none> Annotations: <none> Selector: selector-e6773b2c-c90b-4183-acd5-17382ec9e4fb=true Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.247.0 IPs: 10.0.247.0 Port: http 80/TCP TargetPort: 8083/TCP NodePort: http 32049/TCP Endpoints: 10.64.0.77:8083 Port: udp 90/UDP TargetPort: 8081/UDP NodePort: udp 31970/UDP Endpoints: 10.64.0.77:8081 Session Affinity: ClientIP External Traffic Policy: Cluster Events: <none> [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 06:18:41.283 STEP: Collecting events from namespace "esipp-9299". 11/26/22 06:18:41.283 STEP: Found 37 events. 11/26/22 06:18:41.329 Nov 26 06:18:41.329: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for external-local-update-w2stv: { } Scheduled: Successfully assigned esipp-9299/external-local-update-w2stv to bootstrap-e2e-minion-group-8xrn Nov 26 06:18:41.329: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-0: { } Scheduled: Successfully assigned esipp-9299/netserver-0 to bootstrap-e2e-minion-group-4lvd Nov 26 06:18:41.329: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-1: { } Scheduled: Successfully assigned esipp-9299/netserver-1 to bootstrap-e2e-minion-group-6hf3 Nov 26 06:18:41.329: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-2: { } Scheduled: Successfully assigned esipp-9299/netserver-2 to bootstrap-e2e-minion-group-8xrn Nov 26 06:18:41.329: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-container-pod: { } Scheduled: Successfully assigned esipp-9299/test-container-pod to bootstrap-e2e-minion-group-8xrn Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:13 +0000 UTC - event for external-local-update: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:49 +0000 UTC - event for external-local-update: {service-controller } EnsuredLoadBalancer: Ensured load balancer Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:49 +0000 UTC - event for external-local-update: {replication-controller } SuccessfulCreate: Created pod: external-local-update-w2stv Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:51 +0000 UTC - event for external-local-update-w2stv: {kubelet bootstrap-e2e-minion-group-8xrn} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-tp9v6" : failed to sync configmap cache: timed out waiting for the condition Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:52 +0000 UTC - event for external-local-update-w2stv: {kubelet 
bootstrap-e2e-minion-group-8xrn} Started: Started container netexec Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:52 +0000 UTC - event for external-local-update-w2stv: {kubelet bootstrap-e2e-minion-group-8xrn} Created: Created container netexec Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:52 +0000 UTC - event for external-local-update-w2stv: {kubelet bootstrap-e2e-minion-group-8xrn} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:54 +0000 UTC - event for external-local-update: {service-controller } ExternalTrafficPolicy: Local -> Cluster Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:56 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-8xrn} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:57 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-4lvd} Killing: Stopping container webserver Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:57 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-4lvd} Started: Started container webserver Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:57 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-4lvd} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:57 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-4lvd} Created: Created container webserver Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:57 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6hf3} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-d9zjp" : failed to sync configmap cache: timed out waiting for the condition Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:57 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-8xrn} Started: Started container webserver Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:57 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-8xrn} Created: Created container webserver Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:58 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-4lvd} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:58 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6hf3} Created: Created container webserver Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:58 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6hf3} Started: Started container webserver Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:58 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6hf3} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 06:18:41.329: INFO: At 2022-11-26 06:15:59 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-8xrn} Killing: Stopping container webserver Nov 26 06:18:41.329: INFO: At 2022-11-26 06:16:00 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-4lvd} BackOff: Back-off restarting failed container webserver in pod netserver-0_esipp-9299(6f222c36-53e5-40cb-9759-13b7087b3c1b) Nov 26 06:18:41.329: INFO: At 2022-11-26 06:16:00 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6hf3} Killing: Stopping container webserver Nov 26 06:18:41.329: INFO: At 2022-11-26 06:16:00 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-8xrn} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 26 06:18:41.329: INFO: At 2022-11-26 06:16:02 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6hf3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 26 06:18:41.329: INFO: At 2022-11-26 06:16:04 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-8xrn} BackOff: Back-off restarting failed container webserver in pod netserver-2_esipp-9299(c9f23cd7-445f-444f-8548-2ba57e33b863) Nov 26 06:18:41.329: INFO: At 2022-11-26 06:16:05 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6hf3} BackOff: Back-off restarting failed container webserver in pod netserver-1_esipp-9299(bd9be559-4b2b-4fc1-b538-91514020d174) Nov 26 06:18:41.329: INFO: At 2022-11-26 06:17:57 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-8xrn} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 06:18:41.329: INFO: At 2022-11-26 06:17:57 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-8xrn} Created: Created container webserver Nov 26 06:18:41.329: INFO: At 2022-11-26 06:17:57 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-8xrn} Started: Started container webserver Nov 26 06:18:41.329: INFO: At 2022-11-26 06:18:30 +0000 UTC - event for external-local-update: {service-controller } DeletingLoadBalancer: Deleting load balancer Nov 26 06:18:41.329: INFO: At 2022-11-26 06:18:30 +0000 UTC - event for external-local-update: {service-controller } Type: LoadBalancer -> ClusterIP Nov 26 06:18:41.373: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 06:18:41.373: INFO: external-local-update-w2stv bootstrap-e2e-minion-group-8xrn Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:15:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:15:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:15:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:15:49 +0000 UTC }] Nov 26 06:18:41.373: INFO: netserver-0 bootstrap-e2e-minion-group-4lvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:15:56 +0000 UTC } 
{Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:17:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:17:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:15:55 +0000 UTC }] Nov 26 06:18:41.373: INFO: netserver-1 bootstrap-e2e-minion-group-6hf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:15:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:17:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:17:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:15:56 +0000 UTC }] Nov 26 06:18:41.373: INFO: netserver-2 bootstrap-e2e-minion-group-8xrn Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:15:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:17:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:17:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:15:56 +0000 UTC }] Nov 26 06:18:41.373: INFO: test-container-pod bootstrap-e2e-minion-group-8xrn Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:17:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:17:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:17:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:17:56 +0000 UTC }] Nov 26 06:18:41.373: INFO: Nov 26 06:18:41.711: INFO: Logging node info for node bootstrap-e2e-master Nov 26 06:18:41.753: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 193495c1-5b1f-409e-8da9-b1de094ba8ed 7957 0 2022-11-26 06:06:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 06:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 06:17:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:49 +0000 UTC,LastTransitionTime:2022-11-26 06:06:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:05 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:05 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:05 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:17:05 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.17.181,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d1b5362410d55fa2b320229b7ed22cb3,SystemUUID:d1b53624-10d5-5fa2-b320-229b7ed22cb3,BootID:13424371-b4dc-4f78-9364-1afc151da040,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:18:41.753: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 06:18:41.799: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 06:18:41.861: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:41.861: INFO: Container etcd-container ready: true, restart count 1 Nov 26 06:18:41.861: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:41.861: INFO: Container kube-apiserver ready: true, restart count 0 Nov 26 06:18:41.861: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:41.861: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 26 06:18:41.861: INFO: 
kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 06:06:05 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:41.861: INFO: Container kube-addon-manager ready: true, restart count 1 Nov 26 06:18:41.861: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 06:06:05 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:41.861: INFO: Container l7-lb-controller ready: false, restart count 5 Nov 26 06:18:41.861: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:41.861: INFO: Container etcd-container ready: true, restart count 1 Nov 26 06:18:41.861: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:41.861: INFO: Container konnectivity-server-container ready: true, restart count 1 Nov 26 06:18:41.861: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:41.861: INFO: Container kube-scheduler ready: true, restart count 1 Nov 26 06:18:41.861: INFO: metadata-proxy-v0.1-gg5tl started at 2022-11-26 06:06:31 +0000 UTC (0+2 container statuses recorded) Nov 26 06:18:41.861: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:18:41.861: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:18:42.057: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 06:18:42.057: INFO: Logging node info for node bootstrap-e2e-minion-group-4lvd Nov 26 06:18:42.100: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-4lvd e0d66193-5762-49e7-b872-8ea4dee27a72 8082 0 2022-11-26 06:06:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-4lvd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-4lvd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-4587":"bootstrap-e2e-minion-group-4lvd"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 06:11:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 06:16:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 06:17:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-4lvd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:04 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:04 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:04 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:17:04 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.235.77,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:60066c0df6854403f12a43e8820a94a9,SystemUUID:60066c0d-f685-4403-f12a-43e8820a94a9,BootID:c95b6e64-9275-42a1-b638-b99f8479d167,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:18:42.101: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-4lvd Nov 26 06:18:42.144: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-4lvd Nov 26 06:18:42.204: INFO: metadata-proxy-v0.1-z77w4 started at 2022-11-26 06:06:30 +0000 UTC (0+2 container statuses recorded) Nov 26 06:18:42.204: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:18:42.204: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:18:42.204: INFO: konnectivity-agent-dx4vl started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.204: INFO: Container konnectivity-agent ready: false, restart count 5 Nov 26 06:18:42.204: INFO: ss-2 started at 2022-11-26 06:09:41 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.204: INFO: Container webserver ready: true, restart count 3 Nov 26 06:18:42.204: INFO: csi-mockplugin-0 started at 2022-11-26 06:14:37 +0000 UTC (0+3 container statuses recorded) Nov 26 06:18:42.204: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 06:18:42.204: INFO: Container driver-registrar ready: true, restart count 2 Nov 26 06:18:42.204: INFO: Container mock ready: true, restart count 2 Nov 26 06:18:42.204: INFO: pvc-volume-tester-vcrtr started at 2022-11-26 06:14:49 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.204: INFO: Container volume-tester ready: false, restart count 0 Nov 26 06:18:42.204: INFO: pod-2c115652-4636-4797-8221-d1eea046cf6a started at 2022-11-26 06:08:36 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.204: INFO: Container write-pod ready: false, restart count 0 Nov 26 06:18:42.204: INFO: csi-hostpathplugin-0 started at 2022-11-26 06:08:55 +0000 UTC (0+7 container statuses recorded) Nov 26 06:18:42.204: INFO: Container csi-attacher ready: false, restart count 4 Nov 26 06:18:42.204: INFO: Container csi-provisioner ready: false, restart count 4 Nov 26 06:18:42.204: INFO: Container csi-resizer ready: false, restart count 4 Nov 26 06:18:42.204: INFO: Container csi-snapshotter ready: false, restart count 4 Nov 26 06:18:42.204: INFO: Container hostpath ready: false, restart count 4 Nov 26 06:18:42.204: INFO: Container liveness-probe ready: false, restart count 4 Nov 26 06:18:42.204: INFO: Container node-driver-registrar ready: false, restart count 4 Nov 26 06:18:42.204: INFO: kube-proxy-bootstrap-e2e-minion-group-4lvd started at 2022-11-26 06:06:29 +0000 UTC (0+1 
container statuses recorded) Nov 26 06:18:42.204: INFO: Container kube-proxy ready: false, restart count 6 Nov 26 06:18:42.204: INFO: coredns-6d97d5ddb-n6d4l started at 2022-11-26 06:06:46 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.204: INFO: Container coredns ready: false, restart count 6 Nov 26 06:18:42.204: INFO: netserver-0 started at 2022-11-26 06:18:37 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.204: INFO: Container webserver ready: false, restart count 0 Nov 26 06:18:42.204: INFO: netserver-0 started at 2022-11-26 06:15:56 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.204: INFO: Container webserver ready: true, restart count 3 Nov 26 06:18:42.419: INFO: Latency metrics for node bootstrap-e2e-minion-group-4lvd Nov 26 06:18:42.419: INFO: Logging node info for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:18:42.462: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6hf3 b331b1c1-4929-41cb-863a-a271765dc91a 8161 0 2022-11-26 06:06:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6hf3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6hf3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8219":"bootstrap-e2e-minion-group-6hf3","csi-hostpath-provisioning-4288":"bootstrap-e2e-minion-group-6hf3","csi-mock-csi-mock-volumes-3236":"bootstrap-e2e-minion-group-6hf3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 06:06:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 06:16:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 06:17:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 06:17:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-6hf3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:16:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:16:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:16:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 
UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:16:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:49 +0000 UTC,LastTransitionTime:2022-11-26 06:06:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:19 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:19 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:19 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:17:19 +0000 UTC,LastTransitionTime:2022-11-26 06:06:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.104.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:26797942e06ae330f816fe3103a312e5,SystemUUID:26797942-e06a-e330-f816-fe3103a312e5,BootID:2fb94944-0ddb-4524-8e35-935d1c8900f4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8356e743-6d51-11ed-b618-52bed00202b8 kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8af161f1-6d51-11ed-b618-52bed00202b8],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8356e743-6d51-11ed-b618-52bed00202b8,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8af161f1-6d51-11ed-b618-52bed00202b8,DevicePath:,},},Config:nil,},} Nov 26 06:18:42.463: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:18:42.507: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6hf3 Nov 26 06:18:42.604: INFO: pvc-volume-tester-btbrd started at 2022-11-26 06:13:53 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.604: INFO: Container volume-tester ready: false, restart count 0 Nov 26 06:18:42.604: INFO: pvc-volume-tester-pg8h9 started at 2022-11-26 06:14:13 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.604: INFO: Container volume-tester ready: true, restart count 0 Nov 26 06:18:42.604: INFO: konnectivity-agent-czjjn started at 2022-11-26 06:06:49 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.604: INFO: Container konnectivity-agent ready: true, restart count 5 Nov 26 06:18:42.604: INFO: pvc-volume-tester-bl2qj started at 2022-11-26 06:14:00 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.604: INFO: Container volume-tester ready: true, restart count 0 Nov 26 06:18:42.604: INFO: netserver-1 started at 2022-11-26 06:15:56 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.604: INFO: Container webserver ready: false, restart count 3 Nov 26 06:18:42.604: INFO: csi-mockplugin-attacher-0 started at 2022-11-26 06:13:45 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.604: INFO: Container csi-attacher ready: true, restart count 3 Nov 26 
06:18:42.604: INFO: csi-mockplugin-0 started at 2022-11-26 06:13:45 +0000 UTC (0+3 container statuses recorded) Nov 26 06:18:42.604: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 06:18:42.604: INFO: Container driver-registrar ready: true, restart count 3 Nov 26 06:18:42.604: INFO: Container mock ready: true, restart count 3 Nov 26 06:18:42.604: INFO: kube-proxy-bootstrap-e2e-minion-group-6hf3 started at 2022-11-26 06:06:35 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.604: INFO: Container kube-proxy ready: true, restart count 6 Nov 26 06:18:42.604: INFO: metrics-server-v0.5.2-867b8754b9-hr966 started at 2022-11-26 06:07:02 +0000 UTC (0+2 container statuses recorded) Nov 26 06:18:42.604: INFO: Container metrics-server ready: false, restart count 5 Nov 26 06:18:42.604: INFO: Container metrics-server-nanny ready: false, restart count 5 Nov 26 06:18:42.604: INFO: ss-1 started at 2022-11-26 06:09:25 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.604: INFO: Container webserver ready: true, restart count 6 Nov 26 06:18:42.604: INFO: nfs-server started at 2022-11-26 06:09:26 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.604: INFO: Container nfs-server ready: false, restart count 4 Nov 26 06:18:42.604: INFO: metadata-proxy-v0.1-hgwt5 started at 2022-11-26 06:06:36 +0000 UTC (0+2 container statuses recorded) Nov 26 06:18:42.604: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:18:42.604: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:18:42.604: INFO: csi-hostpathplugin-0 started at 2022-11-26 06:15:56 +0000 UTC (0+7 container statuses recorded) Nov 26 06:18:42.604: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 06:18:42.604: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 06:18:42.604: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 06:18:42.604: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 06:18:42.604: INFO: Container hostpath ready: true, restart count 2 Nov 26 06:18:42.604: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 06:18:42.604: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 26 06:18:42.604: INFO: csi-hostpathplugin-0 started at 2022-11-26 06:16:55 +0000 UTC (0+7 container statuses recorded) Nov 26 06:18:42.604: INFO: Container csi-attacher ready: true, restart count 0 Nov 26 06:18:42.604: INFO: Container csi-provisioner ready: true, restart count 0 Nov 26 06:18:42.604: INFO: Container csi-resizer ready: true, restart count 0 Nov 26 06:18:42.604: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 26 06:18:42.604: INFO: Container hostpath ready: true, restart count 0 Nov 26 06:18:42.604: INFO: Container liveness-probe ready: true, restart count 0 Nov 26 06:18:42.604: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 26 06:18:42.604: INFO: netserver-1 started at 2022-11-26 06:18:37 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:42.604: INFO: Container webserver ready: false, restart count 0 Nov 26 06:18:42.891: INFO: Latency metrics for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:18:42.891: INFO: Logging node info for node bootstrap-e2e-minion-group-8xrn Nov 26 06:18:42.936: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-8xrn bdb3ca25-6ac6-4bb7-a82f-bb358d7ba8f9 8090 0 2022-11-26 06:06:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 
beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-8xrn kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-8xrn topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 06:10:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 06:16:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 06:17:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-8xrn,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:16:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:17:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:17:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 
+0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.105.92.105,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-8xrn.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-8xrn.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1e49116bed901a0b492cc2a872edd7a8,SystemUUID:1e49116b-ed90-1a0b-492c-c2a872edd7a8,BootID:24bc719e-5b36-4d61-bc89-44455ca28dd7,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:18:42.937: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-8xrn Nov 26 06:18:42.987: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-8xrn Nov 26 06:18:43.059: INFO: ss-0 started at 2022-11-26 06:08:35 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:43.059: INFO: Container webserver ready: false, restart count 5 Nov 26 06:18:43.059: INFO: external-provisioner-php8v started at 2022-11-26 06:16:58 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:43.059: INFO: Container nfs-provisioner ready: false, restart count 3 Nov 26 06:18:43.059: INFO: pod-secrets-87dc7cd1-3169-4f80-851b-f29da7b564c7 started at 2022-11-26 06:17:44 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:43.059: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 06:18:43.059: INFO: external-provisioner-z8sqc started at 2022-11-26 06:18:07 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:43.059: INFO: Container nfs-provisioner ready: false, restart count 2 Nov 26 06:18:43.059: INFO: netserver-2 started at 2022-11-26 06:18:37 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:43.059: INFO: Container webserver ready: false, restart count 0 Nov 26 06:18:43.059: INFO: kube-proxy-bootstrap-e2e-minion-group-8xrn started at 2022-11-26 06:06:29 +0000 UTC (0+1 container statuses recorded) Nov 26 
06:18:43.059: INFO: Container kube-proxy ready: false, restart count 5 Nov 26 06:18:43.059: INFO: konnectivity-agent-7ppwz started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:43.059: INFO: Container konnectivity-agent ready: false, restart count 5 Nov 26 06:18:43.059: INFO: netserver-2 started at 2022-11-26 06:15:56 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:43.059: INFO: Container webserver ready: false, restart count 4 Nov 26 06:18:43.059: INFO: l7-default-backend-8549d69d99-7w7f7 started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:43.059: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 06:18:43.059: INFO: volume-snapshot-controller-0 started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:43.059: INFO: Container volume-snapshot-controller ready: true, restart count 4 Nov 26 06:18:43.059: INFO: pvc-tester-phnnw started at 2022-11-26 06:15:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:43.059: INFO: Container write-pod ready: false, restart count 0 Nov 26 06:18:43.059: INFO: kube-dns-autoscaler-5f6455f985-z5fph started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:43.059: INFO: Container autoscaler ready: false, restart count 5 Nov 26 06:18:43.059: INFO: metadata-proxy-v0.1-h465b started at 2022-11-26 06:06:30 +0000 UTC (0+2 container statuses recorded) Nov 26 06:18:43.059: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:18:43.059: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:18:43.059: INFO: test-container-pod started at 2022-11-26 06:17:56 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:43.059: INFO: Container webserver ready: true, restart count 0 Nov 26 06:18:43.059: INFO: coredns-6d97d5ddb-rr67j started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:43.059: INFO: Container coredns ready: false, restart count 5 Nov 26 06:18:43.059: INFO: external-local-update-w2stv started at 2022-11-26 06:15:49 +0000 UTC (0+1 container statuses recorded) Nov 26 06:18:43.059: INFO: Container netexec ready: true, restart count 0 Nov 26 06:18:43.312: INFO: Latency metrics for node bootstrap-e2e-minion-group-8xrn [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-9299" for this suite. 11/26/22 06:18:43.312
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\sonly\starget\snodes\swith\sendpoints$'
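The test named in this focus expression verifies ExternalTrafficPolicy=Local semantics: for such a Service, kube-proxy serves /healthz on the Service's healthCheckNodePort and answers 200 only on nodes hosting a ready local endpoint, 503 on nodes without one. That is what the "Got status code ... 200"/"... 503" lines in the transcript below are checking as the test moves the backing pod between nodes. A minimal stand-alone sketch of that check (not the test's actual code; the node IPs and port 32495 are taken from the log, the helper name and expectations map are hypothetical):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// healthzStatus fetches kube-proxy's health-check endpoint for a Service
// with externalTrafficPolicy: Local. kube-proxy answers 200 on nodes that
// host a ready local endpoint and 503 on nodes that do not.
func healthzStatus(nodeIP string, port int) (int, error) {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:%d/healthz", nodeIP, port))
	if err != nil {
		return 0, err // the log reports unreachable nodes as status code 0
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	// Node IPs and healthCheckNodePort taken from the log below; which node
	// is expected to answer 200 changes as the test reschedules the pod.
	expected := map[string]bool{
		"10.138.0.5": true,  // bootstrap-e2e-minion-group-4lvd
		"10.138.0.3": false, // bootstrap-e2e-minion-group-6hf3
		"10.138.0.4": false, // bootstrap-e2e-minion-group-8xrn
	}
	for ip, wantLocal := range expected {
		code, err := healthzStatus(ip, 32495)
		fmt.Printf("%s: code=%d err=%v wantLocalEndpoint=%v\n", ip, code, err, wantLocal)
	}
}
```

In the real run the curl happens from inside test-container-pod via pod exec (the ExecWithOptions entries below), since the healthCheckNodePort is only meaningfully reachable from within the cluster network.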
test/e2e/network/loadbalancer.go:1416
k8s.io/kubernetes/test/e2e/network.glob..func20.5()
	test/e2e/network/loadbalancer.go:1416 +0x9a8
from junit_01.xml
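The failure lands inside the step that pokes the load balancer's external IP until it answers. As the repeated `Poking "http://34.145.17.103:8081/echo?msg=hello"` lines below show, the harness retries through "connection refused" and client timeouts until success or a deadline; the goroutine dump further down shows the real test drives this through k8s.io/apimachinery/pkg/util/wait.PollImmediate. A minimal sketch of that retry shape, with illustrative URL, interval, and deadline rather than the test's exact values:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// pokeUntilSuccess issues GETs against url until one returns 200,
// treating dial errors ("connection refused") and client timeouts as
// retryable, mirroring the Poke(...) lines in the log below.
func pokeUntilSuccess(url string, interval, deadline time.Duration) error {
	client := &http.Client{Timeout: 10 * time.Second}
	for end := time.Now().Add(deadline); time.Now().Before(end); time.Sleep(interval) {
		resp, err := client.Get(url)
		if err != nil {
			continue // endpoint not programmed yet; retry
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil // the log records this as Poke(...): success
		}
	}
	return fmt.Errorf("poke %s: retries exhausted", url)
}

func main() {
	// Hypothetical target standing in for http://34.145.17.103:8081/echo?msg=hello.
	err := pokeUntilSuccess("http://127.0.0.1:8081/echo?msg=hello", 2*time.Second, 30*time.Second)
	fmt.Println(err)
}
```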
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:10:44.541 Nov 26 06:10:44.541: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 06:10:44.543 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 06:10:44.725 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 06:10:44.824 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1250 [It] should only target nodes with endpoints test/e2e/network/loadbalancer.go:1346 STEP: creating a service esipp-3192/external-local-nodes with type=LoadBalancer 11/26/22 06:10:45.152 STEP: setting ExternalTrafficPolicy=Local 11/26/22 06:10:45.152 STEP: waiting for loadbalancer for service esipp-3192/external-local-nodes 11/26/22 06:10:45.953 Nov 26 06:10:45.954: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer STEP: waiting for loadbalancer for service esipp-3192/external-local-nodes 11/26/22 06:12:04.197 Nov 26 06:12:04.197: INFO: Waiting up to 15m0s for service "external-local-nodes" to have a LoadBalancer STEP: Performing setup for networking test in namespace esipp-3192 11/26/22 06:12:04.264 STEP: creating a selector 11/26/22 06:12:04.264 STEP: Creating the service pods in kubernetes 11/26/22 06:12:04.264 Nov 26 06:12:04.264: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 26 06:12:04.751: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "esipp-3192" to be "running and ready" Nov 26 06:12:04.941: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 189.884782ms Nov 26 06:12:04.941: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 26 06:12:07.008: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.256995817s Nov 26 06:12:07.008: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:12:09.014: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.262837551s Nov 26 06:12:09.014: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:12:11.015: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.26463749s Nov 26 06:12:11.015: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:12:12.997: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.246648171s Nov 26 06:12:12.997: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:12:15.004: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.252864179s Nov 26 06:12:15.004: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:12:17.014: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.263539032s Nov 26 06:12:17.014: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:12:18.993: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.241990501s Nov 26 06:12:18.993: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:12:20.995: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.243947738s Nov 26 06:12:20.995: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:12:22.994: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.243149135s Nov 26 06:12:22.994: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:12:25.012: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.261095072s Nov 26 06:12:25.012: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:12:26.982: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.231305602s Nov 26 06:12:26.982: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:12:28.983: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.231873618s Nov 26 06:12:28.983: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:12:30.982: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.231303779s Nov 26 06:12:30.982: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:12:32.983: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.232322902s Nov 26 06:12:32.983: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:12:34.996: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.245006102s Nov 26 06:12:34.996: INFO: The phase of Pod netserver-0 is Running (Ready = false) Nov 26 06:12:36.985: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 32.234703188s Nov 26 06:12:36.986: INFO: The phase of Pod netserver-0 is Running (Ready = true) Nov 26 06:12:36.986: INFO: Pod "netserver-0" satisfied condition "running and ready" Nov 26 06:12:37.027: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "esipp-3192" to be "running and ready" Nov 26 06:12:37.072: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 44.773063ms Nov 26 06:12:37.072: INFO: The phase of Pod netserver-1 is Running (Ready = true) Nov 26 06:12:37.072: INFO: Pod "netserver-1" satisfied condition "running and ready" Nov 26 06:12:37.113: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "esipp-3192" to be "running and ready" Nov 26 06:12:37.154: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 41.083736ms Nov 26 06:12:37.154: INFO: The phase of Pod netserver-2 is Running (Ready = true) Nov 26 06:12:37.154: INFO: Pod "netserver-2" satisfied condition "running and ready" STEP: Creating test pods 11/26/22 06:12:37.195 Nov 26 06:12:37.250: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "esipp-3192" to be "running" Nov 26 06:12:37.291: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 41.325193ms Nov 26 06:12:39.334: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.083922267s Nov 26 06:12:39.334: INFO: Pod "test-container-pod" satisfied condition "running" Nov 26 06:12:39.375: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 STEP: Getting node addresses 11/26/22 06:12:39.375 Nov 26 06:12:39.376: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes 11/26/22 06:12:39.46 Nov 26 06:12:39.577: INFO: Service node-port-service in namespace esipp-3192 found. Nov 26 06:12:39.797: INFO: Service session-affinity-service in namespace esipp-3192 found. 
STEP: Waiting for NodePort service to expose endpoint 11/26/22 06:12:39.844 Nov 26 06:12:40.845: INFO: Waiting for amount of service:node-port-service endpoints to be 3 STEP: Waiting for Session Affinity service to expose endpoint 11/26/22 06:12:40.886 Nov 26 06:12:41.887: INFO: Waiting for amount of service:session-affinity-service endpoints to be 3 STEP: creating a pod to be part of the service external-local-nodes on node bootstrap-e2e-minion-group-4lvd 11/26/22 06:12:41.93 Nov 26 06:12:41.988: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 06:12:42.049: INFO: Found all 1 pods Nov 26 06:12:42.049: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-nodes-6krsb] Nov 26 06:12:42.049: INFO: Waiting up to 2m0s for pod "external-local-nodes-6krsb" in namespace "esipp-3192" to be "running and ready" Nov 26 06:12:42.107: INFO: Pod "external-local-nodes-6krsb": Phase="Pending", Reason="", readiness=false. Elapsed: 57.908417ms Nov 26 06:12:42.107: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodes-6krsb' on 'bootstrap-e2e-minion-group-4lvd' to be 'Running' but was 'Pending' Nov 26 06:12:44.154: INFO: Pod "external-local-nodes-6krsb": Phase="Running", Reason="", readiness=true. Elapsed: 2.105102349s Nov 26 06:12:44.154: INFO: Pod "external-local-nodes-6krsb" satisfied condition "running and ready" Nov 26 06:12:44.154: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [external-local-nodes-6krsb] STEP: waiting for service endpoint on node bootstrap-e2e-minion-group-4lvd 11/26/22 06:12:44.154 Nov 26 06:12:44.197: INFO: Pod for service esipp-3192/external-local-nodes is on node bootstrap-e2e-minion-group-4lvd Nov 26 06:12:44.197: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:12:54.198: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 06:12:56.199: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:12:56.240: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": dial tcp 34.145.17.103:8081: connect: connection refused Nov 26 06:12:58.199: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:12:58.239: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": dial tcp 34.145.17.103:8081: connect: connection refused Nov 26 06:13:00.199: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:13:00.238: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": dial tcp 34.145.17.103:8081: connect: connection refused Nov 26 06:13:02.199: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:13:02.239: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": dial tcp 34.145.17.103:8081: connect: connection refused Nov 26 06:13:04.198: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:13:04.238: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": dial tcp 34.145.17.103:8081: connect: connection refused Nov 26 06:13:06.199: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:13:13.557: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): success Nov 26 06:13:13.557: INFO: Health checking bootstrap-e2e-minion-group-4lvd, 
http://10.138.0.5:32495/healthz, expectedSuccess true Nov 26 06:13:13.600: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.5:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:13:13.600: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:13:13.601: INFO: ExecWithOptions: Clientset creation Nov 26 06:13:13.601: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.5%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:13:13.930: INFO: Got status code from http://10.138.0.5:32495/healthz via test container: 200 Nov 26 06:13:14.977: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.5:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:13:14.977: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:13:14.979: INFO: ExecWithOptions: Clientset creation Nov 26 06:13:14.979: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.5%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:13:15.281: INFO: Got status code from http://10.138.0.5:32495/healthz via test container: 200 Nov 26 06:13:15.281: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:13:15.359: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): success Nov 26 06:13:15.359: INFO: Health checking bootstrap-e2e-minion-group-6hf3, http://10.138.0.3:32495/healthz, expectedSuccess false Nov 26 06:13:15.400: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:13:15.401: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:13:15.401: INFO: ExecWithOptions: Clientset creation Nov 26 06:13:15.402: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:13:15.706: INFO: Got status code from http://10.138.0.3:32495/healthz via test container: 503 Nov 26 06:13:16.748: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:13:16.748: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:13:16.749: INFO: ExecWithOptions: Clientset creation Nov 26 06:13:16.749: INFO: ExecWithOptions: execute(POST 
https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:13:17.055: INFO: Got status code from http://10.138.0.3:32495/healthz via test container: 503 Nov 26 06:13:17.055: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:13:17.134: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): success Nov 26 06:13:17.134: INFO: Health checking bootstrap-e2e-minion-group-8xrn, http://10.138.0.4:32495/healthz, expectedSuccess false Nov 26 06:13:17.198: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.4:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:13:17.198: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:13:17.199: INFO: ExecWithOptions: Clientset creation Nov 26 06:13:17.199: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.4%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:13:17.508: INFO: Got status code from http://10.138.0.4:32495/healthz via test container: 503 Nov 26 06:13:18.550: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.4:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:13:18.550: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:13:18.551: INFO: ExecWithOptions: Clientset creation Nov 26 06:13:18.551: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.4%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:13:18.879: INFO: Got status code from http://10.138.0.4:32495/healthz via test container: 503 STEP: deleting ReplicationController external-local-nodes in namespace esipp-3192, will wait for the garbage collector to delete the pods 11/26/22 06:13:18.879 Nov 26 06:13:19.018: INFO: Deleting ReplicationController external-local-nodes took: 46.767941ms Nov 26 06:13:42.820: INFO: Terminating ReplicationController external-local-nodes pods took: 23.801053061s STEP: creating a pod to be part of the service external-local-nodes on node bootstrap-e2e-minion-group-6hf3 11/26/22 06:13:44.221 Nov 26 06:13:44.284: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 06:13:44.332: INFO: Found all 1 pods Nov 26 06:13:44.332: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-nodes-4jdpc] Nov 26 06:13:44.332: INFO: Waiting up to 2m0s for pod "external-local-nodes-4jdpc" in namespace "esipp-3192" to be "running and ready" Nov 26 06:13:44.375: INFO: Pod "external-local-nodes-4jdpc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.873419ms Nov 26 06:13:44.375: INFO: Error evaluating pod condition running and ready: want pod 'external-local-nodes-4jdpc' on 'bootstrap-e2e-minion-group-6hf3' to be 'Running' but was 'Pending' Nov 26 06:13:46.417: INFO: Pod "external-local-nodes-4jdpc": Phase="Running", Reason="", readiness=true. Elapsed: 2.085110303s Nov 26 06:13:46.417: INFO: Pod "external-local-nodes-4jdpc" satisfied condition "running and ready" Nov 26 06:13:46.417: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [external-local-nodes-4jdpc] STEP: waiting for service endpoint on node bootstrap-e2e-minion-group-6hf3 11/26/22 06:13:46.417 Nov 26 06:13:46.458: INFO: Pod for service esipp-3192/external-local-nodes is on node bootstrap-e2e-minion-group-6hf3 Nov 26 06:13:46.458: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:13:56.459: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 06:13:58.460: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:14:01.560: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): success Nov 26 06:14:01.560: INFO: Health checking bootstrap-e2e-minion-group-4lvd, http://10.138.0.5:32495/healthz, expectedSuccess false Nov 26 06:14:01.631: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.5:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:14:01.631: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:14:01.632: INFO: ExecWithOptions: Clientset creation Nov 26 06:14:01.632: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.5%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:14:02.071: INFO: Got status code from http://10.138.0.5:32495/healthz via test container: 503 Nov 26 06:14:03.125: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.5:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:14:03.125: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:14:03.126: INFO: ExecWithOptions: Clientset creation Nov 26 06:14:03.126: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.5%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:14:03.819: INFO: Got status code from http://10.138.0.5:32495/healthz via test container: 503 Nov 26 06:14:03.819: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:14:13.820: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 06:14:15.821: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:14:15.900: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): success Nov 26 06:14:15.900: INFO: Health 
checking bootstrap-e2e-minion-group-6hf3, http://10.138.0.3:32495/healthz, expectedSuccess true Nov 26 06:14:16.104: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:14:16.104: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:14:16.105: INFO: ExecWithOptions: Clientset creation Nov 26 06:14:16.105: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:14:16.553: INFO: Got status code from http://10.138.0.3:32495/healthz via test container: 200 Nov 26 06:14:17.629: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:14:17.629: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:14:17.631: INFO: ExecWithOptions: Clientset creation Nov 26 06:14:17.631: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:14:18.097: INFO: Got status code from http://10.138.0.3:32495/healthz via test container: 200 Nov 26 06:14:18.097: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:14:18.175: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): success Nov 26 06:14:18.175: INFO: Health checking bootstrap-e2e-minion-group-8xrn, http://10.138.0.4:32495/healthz, expectedSuccess false Nov 26 06:14:18.273: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.4:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:14:18.273: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:14:18.274: INFO: ExecWithOptions: Clientset creation Nov 26 06:14:18.274: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.4%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:14:19.171: INFO: Got status code from http://10.138.0.4:32495/healthz via test container: 0 Nov 26 06:14:20.237: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.4:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:14:20.237: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:14:20.238: INFO: ExecWithOptions: Clientset creation Nov 26 06:14:20.238: INFO: ExecWithOptions: execute(POST 
https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.4%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:14:20.854: INFO: Got status code from http://10.138.0.4:32495/healthz via test container: 0 STEP: deleting ReplicationController external-local-nodes in namespace esipp-3192, will wait for the garbage collector to delete the pods 11/26/22 06:14:20.854 Nov 26 06:14:21.276: INFO: Deleting ReplicationController external-local-nodes took: 226.391396ms Nov 26 06:14:21.476: INFO: Terminating ReplicationController external-local-nodes pods took: 200.36459ms STEP: creating a pod to be part of the service external-local-nodes on node bootstrap-e2e-minion-group-8xrn 11/26/22 06:14:22.677 Nov 26 06:14:22.828: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 06:14:22.899: INFO: Found 0/1 pods - will retry Nov 26 06:14:24.953: INFO: Found all 1 pods Nov 26 06:14:24.953: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [external-local-nodes-cwmn7] Nov 26 06:14:24.953: INFO: Waiting up to 2m0s for pod "external-local-nodes-cwmn7" in namespace "esipp-3192" to be "running and ready" Nov 26 06:14:25.007: INFO: Pod "external-local-nodes-cwmn7": Phase="Running", Reason="", readiness=true. Elapsed: 53.827489ms Nov 26 06:14:25.007: INFO: Pod "external-local-nodes-cwmn7" satisfied condition "running and ready" Nov 26 06:14:25.007: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [external-local-nodes-cwmn7] STEP: waiting for service endpoint on node bootstrap-e2e-minion-group-8xrn 11/26/22 06:14:25.007 Nov 26 06:14:25.064: INFO: Pod for service esipp-3192/external-local-nodes is on node bootstrap-e2e-minion-group-8xrn Nov 26 06:14:25.064: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:14:25.105: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": dial tcp 34.145.17.103:8081: connect: connection refused Nov 26 06:14:27.105: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:14:27.144: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": dial tcp 34.145.17.103:8081: connect: connection refused Nov 26 06:14:29.105: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:14:29.144: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": dial tcp 34.145.17.103:8081: connect: connection refused Nov 26 06:14:31.106: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:14:31.145: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": dial tcp 34.145.17.103:8081: connect: connection refused Nov 26 06:14:33.105: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:14:33.144: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": dial tcp 34.145.17.103:8081: connect: connection refused Nov 26 06:14:35.105: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:14:35.145: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": dial tcp 34.145.17.103:8081: connect: connection refused Nov 26 06:14:37.105: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:14:37.144: INFO: 
Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": dial tcp 34.145.17.103:8081: connect: connection refused Nov 26 06:14:39.105: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:14:39.145: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": dial tcp 34.145.17.103:8081: connect: connection refused Nov 26 06:14:41.105: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:14:41.145: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": dial tcp 34.145.17.103:8081: connect: connection refused Nov 26 06:14:43.106: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:14:43.145: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": dial tcp 34.145.17.103:8081: connect: connection refused Nov 26 06:14:45.105: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:14:55.106: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 06:14:57.105: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:15:07.107: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): Get "http://34.145.17.103:8081/echo?msg=hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Nov 26 06:15:09.105: INFO: Poking "http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:15:16.437: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): success Nov 26 06:15:16.437: INFO: Health checking bootstrap-e2e-minion-group-4lvd, http://10.138.0.5:32495/healthz, expectedSuccess false Nov 26 06:15:16.486: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.5:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:16.486: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:16.488: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:16.488: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.5%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:16.912: INFO: Got status code from http://10.138.0.5:32495/healthz via test container: 0 Nov 26 06:15:17.964: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.5:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:17.964: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:17.965: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:17.965: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.5%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:18.382: INFO: Got status code from http://10.138.0.5:32495/healthz via test container: 0 Nov 26 06:15:18.382: INFO: Poking 
"http://34.145.17.103:8081/echo?msg=hello" Nov 26 06:15:18.460: INFO: Poke("http://34.145.17.103:8081/echo?msg=hello"): success Nov 26 06:15:18.460: INFO: Health checking bootstrap-e2e-minion-group-6hf3, http://10.138.0.3:32495/healthz, expectedSuccess false Nov 26 06:15:18.505: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:18.505: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:18.506: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:18.506: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:18.737: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:19.891: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:19.891: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:19.892: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:19.892: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:20.136: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:20.811: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:20.811: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:20.812: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:20.812: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:21.083: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:21.794: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod 
ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:21.794: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:21.795: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:21.795: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:21.992: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:22.787: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:22.787: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:22.788: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:22.788: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:23.004: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:23.815: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:23.815: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:23.816: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:23.816: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:24.017: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:24.810: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:24.810: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:24.811: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:24.811: INFO: ExecWithOptions: execute(POST 
https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:25.068: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:25.800: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:25.800: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:25.801: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:25.801: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:26.012: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:26.789: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:26.789: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:26.790: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:26.790: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:26.985: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:27.800: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:27.800: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:27.801: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:27.801: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:27.965: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} 
http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:28.802: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:28.802: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:28.803: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:28.803: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:28.984: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:29.807: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:29.807: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:29.809: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:29.809: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:30.102: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:30.895: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:30.895: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:30.896: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:30.896: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:31.232: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:31.788: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:31.788: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 
06:15:31.789: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:31.789: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:31.959: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:32.780: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:32.780: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:32.782: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:32.782: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:32.946: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:33.824: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:33.824: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:33.825: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:33.825: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:34.110: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:34.874: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:34.874: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:34.876: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:34.876: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:35.168: INFO: Got error reading status code from 
http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: ""
Nov 26 06:15:35.801: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 26 06:15:35.801: INFO: >>> kubeConfig: /workspace/.kube/config
Nov 26 06:15:35.803: INFO: ExecWithOptions: Clientset creation
Nov 26 06:15:35.803: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true)
Nov 26 06:15:35.987: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: ""
[... the identical ExecWithOptions/curl attempt repeats roughly once per second at 06:15:36.807, 06:15:37.786, 06:15:38.794, 06:15:39.795, 06:15:40.832, 06:15:41.795, 06:15:42.799, 06:15:43.789, and 06:15:44.803, each failing with the same "error dialing backend: No agent available" ...]
Nov 26 06:15:45.031: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: ""
http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" ------------------------------ Progress Report for Ginkgo Process #6 Automatically polling progress: [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints (Spec Runtime: 5m0.535s) test/e2e/network/loadbalancer.go:1346 In [It] (Node Runtime: 5m0.001s) test/e2e/network/loadbalancer.go:1346 At [By Step] waiting for service endpoint on node bootstrap-e2e-minion-group-8xrn (Step Runtime: 1m20.069s) test/e2e/network/loadbalancer.go:1395 Spec Goroutine goroutine 1008 [select] k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc000136000}, 0xc003b302b8, 0x2fdb16a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc000136000}, 0x38?, 0x2fd9d05?, 0x38?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc000136000}, 0x7fa7740?, 0xc0030efc88?, 0x262a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc001d5e240?, 0xc001d5e24b?, 0xc0030efd10?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 > k8s.io/kubernetes/test/e2e/network.testHTTPHealthCheckNodePortFromTestContainer(0xc0011e02a0, {0xc004f7b130, 0xa}, 0x7eef, 0xc0030eff00?, 0x0, 0x2) test/e2e/network/service.go:705 > k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1409 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc000562f00}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 ------------------------------ Nov 26 06:15:45.909: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:45.909: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:45.911: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:45.911: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:46.282: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:46.802: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:46.802: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:46.804: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:46.804: INFO: 
ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:46.988: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:47.799: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:47.799: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:47.800: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:47.800: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:48.025: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:48.792: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:48.792: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:48.793: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:48.793: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:49.002: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:49.086: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s -o /dev/null -w %{http_code} http://10.138.0.3:32495/healthz] Namespace:esipp-3192 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 26 06:15:49.086: INFO: >>> kubeConfig: /workspace/.kube/config Nov 26 06:15:49.087: INFO: ExecWithOptions: Clientset creation Nov 26 06:15:49.087: INFO: ExecWithOptions: execute(POST https://34.83.17.181/api/v1/namespaces/esipp-3192/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+-o+%2Fdev%2Fnull+-w+%25%7Bhttp_code%7D+http%3A%2F%2F10.138.0.3%3A32495%2Fhealthz&container=webserver&container=webserver&stderr=true&stdout=true) Nov 26 06:15:49.413: INFO: Got error reading status code from http://10.138.0.3:32495/healthz via test container: failed to execute "curl -g -q -s -o /dev/null 
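------------------------------
Each ExecWithOptions entry above is the e2e framework driving the pods/exec subresource, as the logged POST URL shows; the apiserver tunnels the exec session to the kubelet through konnectivity, and "error dialing backend: No agent available" typically means no konnectivity agent tunnel was registered, so the curl never ran on the node. Below is a minimal client-go sketch of the same subresource call under those assumptions; the function name and the pared-down error handling are illustrative, not the framework's.

package main

import (
	"bytes"
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

// execCurl runs `curl -s -o /dev/null -w '%{http_code}' <url>` inside the
// given pod/container via the pods/exec subresource, the same POST the
// ExecWithOptions log lines above show.
func execCurl(kubeconfig, ns, pod, container, url string) (string, error) {
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return "", err
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return "", err
	}
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   []string{"/bin/sh", "-c", "curl -g -q -s -o /dev/null -w '%{http_code}' " + url},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)
	// SPDY upgrade against the exec URL; this is the hop that fails with
	// "error dialing backend" when the apiserver cannot reach the kubelet.
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return "", err
	}
	var stdout, stderr bytes.Buffer
	// StreamWithContext is the current client-go API (plain Stream on older releases).
	if err := exec.StreamWithContext(context.Background(), remotecommand.StreamOptions{
		Stdout: &stdout,
		Stderr: &stderr,
	}); err != nil {
		return "", fmt.Errorf("failed to execute curl: %w, stderr: %q", err, stderr.String())
	}
	return stdout.String(), nil // the HTTP status code printed by -w
}

func main() {
	code, err := execCurl("/workspace/.kube/config", "esipp-3192", "test-container-pod",
		"webserver", "http://10.138.0.3:32495/healthz")
	if err != nil {
		fmt.Println("exec error:", err)
		return
	}
	fmt.Println("status code:", code)
}
------------------------------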
-w %{http_code} http://10.138.0.3:32495/healthz": error dialing backend: No agent available, stderr: "" Nov 26 06:15:49.413: INFO: Unexpected error: <*errors.errorString | 0xc000d25390>: { s: "error waiting for healthCheckNodePort: expected at least 2 succeed=false on 10.138.0.3:32495/healthz, got 0", } Nov 26 06:15:49.413: FAIL: error waiting for healthCheckNodePort: expected at least 2 succeed=false on 10.138.0.3:32495/healthz, got 0 Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.5() test/e2e/network/loadbalancer.go:1416 +0x9a8 Nov 26 06:15:49.903: INFO: Waiting up to 15m0s for service "external-local-nodes" to have no LoadBalancer [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 06:16:00.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 Nov 26 06:16:00.402: INFO: Output of kubectl describe svc: Nov 26 06:16:00.402: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=esipp-3192 describe svc --namespace=esipp-3192' Nov 26 06:16:01.840: INFO: stderr: "" Nov 26 06:16:01.840: INFO: stdout: "Name: external-local-nodes\nNamespace: esipp-3192\nLabels: testid=external-local-nodes-e4465dea-9e12-4e32-bc19-fff5f2498f3f\nAnnotations: <none>\nSelector: testid=external-local-nodes-e4465dea-9e12-4e32-bc19-fff5f2498f3f\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.10.99\nIPs: 10.0.10.99\nPort: <unset> 8081/TCP\nTargetPort: 80/TCP\nEndpoints: 10.64.1.84:80\nSession Affinity: None\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringLoadBalancer 4m34s service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 3m57s service-controller Ensured load balancer\n Normal EnsuringLoadBalancer 117s service-controller Ensuring load balancer\n Normal EnsuredLoadBalancer 113s service-controller Ensured load balancer\n Normal Type 11s service-controller LoadBalancer -> ClusterIP\n Normal DeletingLoadBalancer 11s service-controller Deleting load balancer\n\n\nName: node-port-service\nNamespace: esipp-3192\nLabels: <none>\nAnnotations: <none>\nSelector: selector-fb28cb38-5dcb-4f56-9fae-b3ec0c3fc452=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.204.196\nIPs: 10.0.204.196\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 31228/TCP\nEndpoints: 10.64.0.53:8083,10.64.3.111:8083\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 30974/UDP\nEndpoints: 10.64.0.53:8081,10.64.3.111:8081\nSession Affinity: None\nExternal Traffic Policy: Cluster\nEvents: <none>\n\n\nName: session-affinity-service\nNamespace: esipp-3192\nLabels: <none>\nAnnotations: <none>\nSelector: selector-fb28cb38-5dcb-4f56-9fae-b3ec0c3fc452=true\nType: NodePort\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.63.121\nIPs: 10.0.63.121\nPort: http 80/TCP\nTargetPort: 8083/TCP\nNodePort: http 30030/TCP\nEndpoints: 10.64.0.53:8083,10.64.3.111:8083\nPort: udp 90/UDP\nTargetPort: 8081/UDP\nNodePort: udp 32071/UDP\nEndpoints: 10.64.0.53:8081,10.64.3.111:8081\nSession Affinity: ClientIP\nExternal Traffic Policy: Cluster\nEvents: <none>\n" Nov 26 06:16:01.840: INFO: Name: external-local-nodes Namespace: esipp-3192 Labels: testid=external-local-nodes-e4465dea-9e12-4e32-bc19-fff5f2498f3f Annotations: <none> Selector: 
Nov 26 06:15:49.903: INFO: Waiting up to 15m0s for service "external-local-nodes" to have no LoadBalancer
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/framework/node/init/init.go:32
Nov 26 06:16:00.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/network/loadbalancer.go:1260
Nov 26 06:16:00.402: INFO: Output of kubectl describe svc:
Nov 26 06:16:00.402: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=esipp-3192 describe svc --namespace=esipp-3192'
Nov 26 06:16:01.840: INFO: stderr: ""
Nov 26 06:16:01.840: INFO: stdout:
Name:              external-local-nodes
Namespace:         esipp-3192
Labels:            testid=external-local-nodes-e4465dea-9e12-4e32-bc19-fff5f2498f3f
Annotations:       <none>
Selector:          testid=external-local-nodes-e4465dea-9e12-4e32-bc19-fff5f2498f3f
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.0.10.99
IPs:               10.0.10.99
Port:              <unset>  8081/TCP
TargetPort:        80/TCP
Endpoints:         10.64.1.84:80
Session Affinity:  None
Events:
  Type    Reason                Age    From                Message
  ----    ------                ----   ----                -------
  Normal  EnsuringLoadBalancer  4m34s  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   3m57s  service-controller  Ensured load balancer
  Normal  EnsuringLoadBalancer  117s   service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   113s   service-controller  Ensured load balancer
  Normal  Type                  11s    service-controller  LoadBalancer -> ClusterIP
  Normal  DeletingLoadBalancer  11s    service-controller  Deleting load balancer


Name:                     node-port-service
Namespace:                esipp-3192
Labels:                   <none>
Annotations:              <none>
Selector:                 selector-fb28cb38-5dcb-4f56-9fae-b3ec0c3fc452=true
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.0.204.196
IPs:                      10.0.204.196
Port:                     http  80/TCP
TargetPort:               8083/TCP
NodePort:                 http  31228/TCP
Endpoints:                10.64.0.53:8083,10.64.3.111:8083
Port:                     udp  90/UDP
TargetPort:               8081/UDP
NodePort:                 udp  30974/UDP
Endpoints:                10.64.0.53:8081,10.64.3.111:8081
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>


Name:                     session-affinity-service
Namespace:                esipp-3192
Labels:                   <none>
Annotations:              <none>
Selector:                 selector-fb28cb38-5dcb-4f56-9fae-b3ec0c3fc452=true
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.0.63.121
IPs:                      10.0.63.121
Port:                     http  80/TCP
TargetPort:               8083/TCP
NodePort:                 http  30030/TCP
Endpoints:                10.64.0.53:8083,10.64.3.111:8083
Port:                     udp  90/UDP
TargetPort:               8081/UDP
NodePort:                 udp  32071/UDP
Endpoints:                10.64.0.53:8081,10.64.3.111:8081
Session Affinity:         ClientIP
External Traffic Policy:  Cluster
Events:                   <none>
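------------------------------
For context on the objects above: external-local-nodes is created by the ESIPP test as a LoadBalancer service with externalTrafficPolicy: Local (the describe shows it after the AfterEach has already flipped it back to ClusterIP and begun deleting the load balancer). The Local policy is what makes the apiserver allocate spec.healthCheckNodePort, 32495 in this run, which kube-proxy serves on every node. A hedged client-go sketch of creating such a service; the service name, selector, and port are illustrative, not the test's.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "external-local-demo"}, // illustrative name
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "netexec"}, // illustrative selector
			Ports:    []corev1.ServicePort{{Port: 80}},
			// Local keeps external traffic on nodes with a ready local
			// endpoint; the apiserver then allocates
			// spec.healthCheckNodePort, and kube-proxy serves /healthz on
			// it: 200 on nodes with a local endpoint, non-200 elsewhere.
			// That port is what the failed probes against
			// 10.138.0.3:32495 were checking.
			ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
		},
	}
	created, err := clientset.CoreV1().Services("esipp-3192").Create(
		context.Background(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("healthCheckNodePort:", created.Spec.HealthCheckNodePort)
}
------------------------------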
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow]
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 11/26/22 06:16:01.84
STEP: Collecting events from namespace "esipp-3192". 11/26/22 06:16:01.84
STEP: Found 53 events. 11/26/22 06:16:01.883
Nov 26 06:16:01.883: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-0: { } Scheduled: Successfully assigned esipp-3192/netserver-0 to bootstrap-e2e-minion-group-4lvd
Nov 26 06:16:01.883: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-1: { } Scheduled: Successfully assigned esipp-3192/netserver-1 to bootstrap-e2e-minion-group-6hf3
Nov 26 06:16:01.883: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-2: { } Scheduled: Successfully assigned esipp-3192/netserver-2 to bootstrap-e2e-minion-group-8xrn
Nov 26 06:16:01.883: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-container-pod: { } Scheduled: Successfully assigned esipp-3192/test-container-pod to bootstrap-e2e-minion-group-6hf3
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:11:26 +0000 UTC - event for external-local-nodes: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:03 +0000 UTC - event for external-local-nodes: {service-controller } EnsuredLoadBalancer: Ensured load balancer
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:05 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-4lvd} Started: Started container webserver
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:05 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-4lvd} Created: Created container webserver
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:05 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-4lvd} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:06 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6hf3} Started: Started container webserver
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:06 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6hf3} Created: Created container webserver
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:06 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6hf3} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:06 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-8xrn} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-nmxgq" : failed to sync configmap cache: timed out waiting for the condition
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:07 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-8xrn} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:07 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-8xrn} Started: Started container webserver
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:07 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-8xrn} Created: Created container webserver
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:15 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-4lvd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:15 +0000 UTC - event for netserver-0: {kubelet bootstrap-e2e-minion-group-4lvd} Killing: Stopping container webserver
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:38 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-6hf3} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:38 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-6hf3} Started: Started container webserver
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:38 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-6hf3} Killing: Stopping container webserver
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:38 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-6hf3} Created: Created container webserver
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:39 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-6hf3} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:41 +0000 UTC - event for external-local-nodes: {replication-controller } SuccessfulCreate: Created pod: external-local-nodes-6krsb
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:42 +0000 UTC - event for external-local-nodes-6krsb: {kubelet bootstrap-e2e-minion-group-4lvd} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:42 +0000 UTC - event for external-local-nodes-6krsb: {kubelet bootstrap-e2e-minion-group-4lvd} Started: Started container netexec
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:42 +0000 UTC - event for external-local-nodes-6krsb: {kubelet bootstrap-e2e-minion-group-4lvd} Created: Created container netexec
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:12:43 +0000 UTC - event for test-container-pod: {kubelet bootstrap-e2e-minion-group-6hf3} BackOff: Back-off restarting failed container webserver in pod test-container-pod_esipp-3192(2a395b0a-953e-4b2c-8bb8-cf369f5bcec4)
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:13:42 +0000 UTC - event for external-local-nodes-6krsb: {kubelet bootstrap-e2e-minion-group-4lvd} Killing: Stopping container netexec
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:13:44 +0000 UTC - event for external-local-nodes: {replication-controller } SuccessfulCreate: Created pod: external-local-nodes-4jdpc
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:13:45 +0000 UTC - event for external-local-nodes-4jdpc: {kubelet bootstrap-e2e-minion-group-6hf3} Started: Started container netexec
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:13:45 +0000 UTC - event for external-local-nodes-4jdpc: {kubelet bootstrap-e2e-minion-group-6hf3} Created: Created container netexec
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:13:45 +0000 UTC - event for external-local-nodes-4jdpc: {kubelet bootstrap-e2e-minion-group-6hf3} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:13:47 +0000 UTC - event for external-local-nodes-4jdpc: {kubelet bootstrap-e2e-minion-group-6hf3} Unhealthy: Readiness probe failed: Get "http://10.64.3.96:80/hostName": dial tcp 10.64.3.96:80: connect: connection refused
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:13:47 +0000 UTC - event for external-local-nodes-4jdpc: {kubelet bootstrap-e2e-minion-group-6hf3} Killing: Stopping container netexec
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:13:48 +0000 UTC - event for external-local-nodes-4jdpc: {kubelet bootstrap-e2e-minion-group-6hf3} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:14:03 +0000 UTC - event for external-local-nodes: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:14:07 +0000 UTC - event for external-local-nodes: {service-controller } EnsuredLoadBalancer: Ensured load balancer
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:14:22 +0000 UTC - event for external-local-nodes: {replication-controller } SuccessfulCreate: Created pod: external-local-nodes-cwmn7
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:14:23 +0000 UTC - event for external-local-nodes-cwmn7: {kubelet bootstrap-e2e-minion-group-8xrn} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:14:23 +0000 UTC - event for external-local-nodes-cwmn7: {kubelet bootstrap-e2e-minion-group-8xrn} Created: Created container netexec
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:14:23 +0000 UTC - event for external-local-nodes-cwmn7: {kubelet bootstrap-e2e-minion-group-8xrn} Started: Started container netexec
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:14:24 +0000 UTC - event for external-local-nodes-cwmn7: {kubelet bootstrap-e2e-minion-group-8xrn} Killing: Stopping container netexec
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:14:25 +0000 UTC - event for external-local-nodes-cwmn7: {kubelet bootstrap-e2e-minion-group-8xrn} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:14:27 +0000 UTC - event for external-local-nodes-cwmn7: {kubelet bootstrap-e2e-minion-group-8xrn} BackOff: Back-off restarting failed container netexec in pod external-local-nodes-cwmn7_esipp-3192(7b9911fa-9f1e-433f-84dc-2f8c752a3312)
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:14:31 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6hf3} Killing: Stopping container webserver
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:14:32 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6hf3} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:14:37 +0000 UTC - event for netserver-1: {kubelet bootstrap-e2e-minion-group-6hf3} BackOff: Back-off restarting failed container webserver in pod netserver-1_esipp-3192(000e2cce-a80e-4b25-b66e-44acfa0e9d10)
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:15:08 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-8xrn} Killing: Stopping container webserver
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:15:09 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-8xrn} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:15:12 +0000 UTC - event for netserver-2: {kubelet bootstrap-e2e-minion-group-8xrn} BackOff: Back-off restarting failed container webserver in pod netserver-2_esipp-3192(de6fb65f-6164-4819-a45c-37d70dbdd2bd)
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:15:49 +0000 UTC - event for external-local-nodes: {service-controller } DeletingLoadBalancer: Deleting load balancer
Nov 26 06:16:01.883: INFO: At 2022-11-26 06:15:49 +0000 UTC - event for external-local-nodes: {service-controller } Type: LoadBalancer -> ClusterIP
Nov 26 06:16:02.421: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Nov 26 06:16:02.421: INFO: external-local-nodes-cwmn7  bootstrap-e2e-minion-group-8xrn  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:14:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:14:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:14:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:14:22 +0000 UTC }]
Nov 26 06:16:02.421: INFO: netserver-0  bootstrap-e2e-minion-group-4lvd  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:12:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:12:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:12:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:12:04 +0000 UTC }]
Nov 26 06:16:02.421: INFO: netserver-1  bootstrap-e2e-minion-group-6hf3  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:12:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:14:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:14:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:12:04 +0000 UTC }]
Nov 26 06:16:02.421: INFO: netserver-2  bootstrap-e2e-minion-group-8xrn  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:12:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:15:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:15:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:12:04 +0000 UTC }]
Nov 26 06:16:02.421: INFO: test-container-pod  bootstrap-e2e-minion-group-6hf3  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:12:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:12:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:12:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:12:37 +0000 UTC }]
Nov 26 06:16:02.421: INFO:
Nov 26 06:16:28.343: INFO: Logging node info for node bootstrap-e2e-master
Nov 26 06:16:28.456: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 193495c1-5b1f-409e-8da9-b1de094ba8ed 4760 0 2022-11-26 06:06:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 06:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 06:11:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:49 +0000 UTC,LastTransitionTime:2022-11-26 06:06:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:11:58 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:11:58 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:11:58 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:11:58 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.17.181,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d1b5362410d55fa2b320229b7ed22cb3,SystemUUID:d1b53624-10d5-5fa2-b320-229b7ed22cb3,BootID:13424371-b4dc-4f78-9364-1afc151da040,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:16:28.457: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 06:16:28.592: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 06:16:28.823: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:28.823: INFO: Container etcd-container ready: true, restart count 1 Nov 26 06:16:28.823: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:28.823: INFO: Container kube-apiserver ready: true, 
restart count 0 Nov 26 06:16:28.823: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:28.823: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 26 06:16:28.823: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 06:06:05 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:28.823: INFO: Container kube-addon-manager ready: true, restart count 1 Nov 26 06:16:28.823: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 06:06:05 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:28.823: INFO: Container l7-lb-controller ready: true, restart count 5 Nov 26 06:16:28.823: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:28.823: INFO: Container etcd-container ready: true, restart count 1 Nov 26 06:16:28.823: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:28.823: INFO: Container konnectivity-server-container ready: true, restart count 1 Nov 26 06:16:28.823: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:28.823: INFO: Container kube-scheduler ready: true, restart count 1 Nov 26 06:16:28.823: INFO: metadata-proxy-v0.1-gg5tl started at 2022-11-26 06:06:31 +0000 UTC (0+2 container statuses recorded) Nov 26 06:16:28.823: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:16:28.823: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:16:29.097: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 06:16:29.097: INFO: Logging node info for node bootstrap-e2e-minion-group-4lvd Nov 26 06:16:29.212: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-4lvd e0d66193-5762-49e7-b872-8ea4dee27a72 7136 0 2022-11-26 06:06:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-4lvd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-4lvd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-4266":"bootstrap-e2e-minion-group-4lvd","csi-hostpath-provisioning-6102":"bootstrap-e2e-minion-group-4lvd","csi-hostpath-provisioning-9467":"bootstrap-e2e-minion-group-4lvd","csi-mock-csi-mock-volumes-4587":"bootstrap-e2e-minion-group-4lvd"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-26 06:11:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 06:11:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 06:15:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-4lvd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:11:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:11:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:11:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:11:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:11:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:11:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:11:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:15:52 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:15:52 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:15:52 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:15:52 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.235.77,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:60066c0df6854403f12a43e8820a94a9,SystemUUID:60066c0d-f685-4403-f12a-43e8820a94a9,BootID:c95b6e64-9275-42a1-b638-b99f8479d167,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-4587^a0a7c4f2-6d51-11ed-a711-7ec84174d328],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:16:29.213: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-4lvd Nov 26 06:16:29.353: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-4lvd Nov 26 06:16:29.518: INFO: pod-2c115652-4636-4797-8221-d1eea046cf6a started at 2022-11-26 06:08:36 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:29.518: INFO: Container write-pod ready: false, restart count 0 Nov 26 06:16:29.518: INFO: csi-hostpathplugin-0 started at 2022-11-26 06:08:55 +0000 UTC (0+7 container statuses recorded) Nov 26 06:16:29.518: INFO: Container csi-attacher ready: true, restart count 4 Nov 26 06:16:29.518: INFO: Container csi-provisioner ready: true, restart count 4 Nov 26 06:16:29.518: INFO: Container csi-resizer ready: true, restart count 4 Nov 26 06:16:29.518: INFO: Container csi-snapshotter ready: true, restart count 4 Nov 26 06:16:29.518: INFO: Container hostpath ready: true, restart count 4 Nov 26 06:16:29.518: INFO: Container liveness-probe ready: true, restart count 4 Nov 26 06:16:29.518: INFO: Container node-driver-registrar ready: true, restart count 4 Nov 26 06:16:29.518: INFO: pvc-volume-tester-vcrtr started at 2022-11-26 06:14:49 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:29.518: INFO: Container volume-tester ready: false, restart count 0 Nov 26 06:16:29.518: INFO: csi-hostpathplugin-0 started at 2022-11-26 06:11:34 +0000 UTC (0+7 container statuses recorded) Nov 26 06:16:29.518: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 06:16:29.518: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 06:16:29.518: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 06:16:29.518: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 06:16:29.518: INFO: Container hostpath ready: true, restart count 2 Nov 26 06:16:29.518: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 06:16:29.518: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 26 06:16:29.518: INFO: csi-hostpathplugin-0 started at 2022-11-26 06:11:39 +0000 UTC (0+7 container statuses recorded) Nov 26 06:16:29.518: INFO: Container csi-attacher ready: true, restart count 1 Nov 26 06:16:29.518: INFO: Container csi-provisioner ready: true, restart count 1 Nov 26 06:16:29.518: INFO: Container csi-resizer ready: true, restart count 1 Nov 26 06:16:29.518: INFO: Container csi-snapshotter ready: true, restart count 1 Nov 26 06:16:29.518: INFO: Container hostpath ready: true, restart count 1 Nov 26 06:16:29.518: INFO: Container liveness-probe 
ready: true, restart count 1 Nov 26 06:16:29.518: INFO: Container node-driver-registrar ready: true, restart count 1 Nov 26 06:16:29.518: INFO: netserver-0 started at 2022-11-26 06:15:56 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:29.518: INFO: Container webserver ready: false, restart count 2 Nov 26 06:16:29.518: INFO: kube-proxy-bootstrap-e2e-minion-group-4lvd started at 2022-11-26 06:06:29 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:29.518: INFO: Container kube-proxy ready: false, restart count 5 Nov 26 06:16:29.518: INFO: coredns-6d97d5ddb-n6d4l started at 2022-11-26 06:06:46 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:29.518: INFO: Container coredns ready: false, restart count 6 Nov 26 06:16:29.518: INFO: netserver-0 started at 2022-11-26 06:12:04 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:29.518: INFO: Container webserver ready: true, restart count 1 Nov 26 06:16:29.518: INFO: ss-2 started at 2022-11-26 06:09:41 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:29.518: INFO: Container webserver ready: true, restart count 3 Nov 26 06:16:29.518: INFO: csi-mockplugin-0 started at 2022-11-26 06:14:37 +0000 UTC (0+3 container statuses recorded) Nov 26 06:16:29.518: INFO: Container csi-provisioner ready: true, restart count 1 Nov 26 06:16:29.518: INFO: Container driver-registrar ready: true, restart count 1 Nov 26 06:16:29.518: INFO: Container mock ready: true, restart count 1 Nov 26 06:16:29.518: INFO: metadata-proxy-v0.1-z77w4 started at 2022-11-26 06:06:30 +0000 UTC (0+2 container statuses recorded) Nov 26 06:16:29.518: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:16:29.518: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:16:29.518: INFO: konnectivity-agent-dx4vl started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:29.518: INFO: Container konnectivity-agent ready: true, restart count 5 Nov 26 06:16:29.931: INFO: Latency metrics for node bootstrap-e2e-minion-group-4lvd Nov 26 06:16:29.931: INFO: Logging node info for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:16:30.012: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6hf3 b331b1c1-4929-41cb-863a-a271765dc91a 7323 0 2022-11-26 06:06:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6hf3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6hf3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-8219":"bootstrap-e2e-minion-group-6hf3","csi-mock-csi-mock-volumes-3236":"bootstrap-e2e-minion-group-6hf3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 06:06:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-26 06:11:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 06:16:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 06:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-6hf3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:11:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:11:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:11:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:11:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:11:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:11:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:11:40 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:49 +0000 UTC,LastTransitionTime:2022-11-26 06:06:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:16:08 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:16:08 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:16:08 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:16:08 +0000 UTC,LastTransitionTime:2022-11-26 06:06:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.104.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:26797942e06ae330f816fe3103a312e5,SystemUUID:26797942-e06a-e330-f816-fe3103a312e5,BootID:2fb94944-0ddb-4524-8e35-935d1c8900f4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-8219^cbaa1d14-6d51-11ed-b2a3-621f4eefce5c kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8356e743-6d51-11ed-b618-52bed00202b8 kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8af161f1-6d51-11ed-b618-52bed00202b8],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8af161f1-6d51-11ed-b618-52bed00202b8,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8356e743-6d51-11ed-b618-52bed00202b8,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-8219^cbaa1d14-6d51-11ed-b2a3-621f4eefce5c,DevicePath:,},},Config:nil,},} Nov 26 06:16:30.013: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:16:30.069: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6hf3 Nov 26 06:16:30.192: INFO: nfs-server started at 2022-11-26 06:09:26 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.192: INFO: Container nfs-server ready: true, restart count 3 Nov 26 06:16:30.192: INFO: netserver-1 started at 2022-11-26 06:12:04 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.192: INFO: Container webserver ready: false, restart count 2 Nov 26 06:16:30.192: INFO: csi-mockplugin-0 started at 2022-11-26 06:13:45 +0000 UTC (0+3 container statuses recorded) Nov 26 06:16:30.192: INFO: Container csi-provisioner ready: true, restart count 3 Nov 26 06:16:30.192: INFO: Container driver-registrar ready: true, restart count 3 Nov 26 06:16:30.192: INFO: Container mock ready: true, restart count 3 Nov 26 06:16:30.192: INFO: kube-proxy-bootstrap-e2e-minion-group-6hf3 started at 2022-11-26 06:06:35 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.192: INFO: Container kube-proxy ready: false, restart count 5 Nov 26 06:16:30.192: INFO: metrics-server-v0.5.2-867b8754b9-hr966 started at 
2022-11-26 06:07:02 +0000 UTC (0+2 container statuses recorded) Nov 26 06:16:30.192: INFO: Container metrics-server ready: false, restart count 5 Nov 26 06:16:30.192: INFO: Container metrics-server-nanny ready: false, restart count 5 Nov 26 06:16:30.192: INFO: ss-1 started at 2022-11-26 06:09:25 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.192: INFO: Container webserver ready: true, restart count 6 Nov 26 06:16:30.192: INFO: mutability-test-2s4kg started at 2022-11-26 06:13:56 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.192: INFO: Container netexec ready: false, restart count 4 Nov 26 06:16:30.192: INFO: metadata-proxy-v0.1-hgwt5 started at 2022-11-26 06:06:36 +0000 UTC (0+2 container statuses recorded) Nov 26 06:16:30.192: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:16:30.192: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:16:30.192: INFO: csi-hostpathplugin-0 started at 2022-11-26 06:15:56 +0000 UTC (0+7 container statuses recorded) Nov 26 06:16:30.192: INFO: Container csi-attacher ready: true, restart count 1 Nov 26 06:16:30.192: INFO: Container csi-provisioner ready: true, restart count 1 Nov 26 06:16:30.192: INFO: Container csi-resizer ready: true, restart count 1 Nov 26 06:16:30.192: INFO: Container csi-snapshotter ready: true, restart count 1 Nov 26 06:16:30.192: INFO: Container hostpath ready: true, restart count 1 Nov 26 06:16:30.192: INFO: Container liveness-probe ready: true, restart count 1 Nov 26 06:16:30.192: INFO: Container node-driver-registrar ready: true, restart count 1 Nov 26 06:16:30.192: INFO: test-container-pod started at 2022-11-26 06:12:37 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.192: INFO: Container webserver ready: true, restart count 2 Nov 26 06:16:30.192: INFO: pvc-volume-tester-btbrd started at 2022-11-26 06:13:53 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.193: INFO: Container volume-tester ready: false, restart count 0 Nov 26 06:16:30.193: INFO: csi-mockplugin-attacher-0 started at 2022-11-26 06:13:45 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.193: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 06:16:30.193: INFO: external-local-lb-kt5mb started at 2022-11-26 06:14:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.193: INFO: Container netexec ready: true, restart count 2 Nov 26 06:16:30.193: INFO: pvc-volume-tester-pg8h9 started at 2022-11-26 06:14:13 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.193: INFO: Container volume-tester ready: true, restart count 0 Nov 26 06:16:30.193: INFO: konnectivity-agent-czjjn started at 2022-11-26 06:06:49 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.193: INFO: Container konnectivity-agent ready: false, restart count 4 Nov 26 06:16:30.193: INFO: pvc-volume-tester-bl2qj started at 2022-11-26 06:14:00 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.193: INFO: Container volume-tester ready: true, restart count 0 Nov 26 06:16:30.193: INFO: netserver-1 started at 2022-11-26 06:15:56 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.193: INFO: Container webserver ready: false, restart count 2 Nov 26 06:16:30.606: INFO: Latency metrics for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:16:30.606: INFO: Logging node info for node bootstrap-e2e-minion-group-8xrn Nov 26 06:16:30.691: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-8xrn bdb3ca25-6ac6-4bb7-a82f-bb358d7ba8f9 7442 0 2022-11-26 06:06:29 
+0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-8xrn kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-8xrn topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 06:10:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 06:11:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 06:16:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-8xrn,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:11:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:11:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:11:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:11:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:11:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:11:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:11:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:16:02 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:16:02 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:16:02 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:16:02 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 
+0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.105.92.105,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-8xrn.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-8xrn.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1e49116bed901a0b492cc2a872edd7a8,SystemUUID:1e49116b-ed90-1a0b-492c-c2a872edd7a8,BootID:24bc719e-5b36-4d61-bc89-44455ca28dd7,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:16:30.691: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-8xrn Nov 26 06:16:30.788: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-8xrn Nov 26 06:16:30.930: INFO: volume-snapshot-controller-0 started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.930: INFO: Container volume-snapshot-controller ready: true, restart count 4 Nov 26 06:16:30.930: INFO: pvc-tester-phnnw started at 2022-11-26 06:15:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.930: INFO: Container write-pod ready: false, restart count 0 Nov 26 06:16:30.930: INFO: kube-dns-autoscaler-5f6455f985-z5fph started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.930: INFO: Container autoscaler ready: true, restart count 5 Nov 26 06:16:30.930: INFO: metadata-proxy-v0.1-h465b started at 2022-11-26 06:06:30 +0000 UTC (0+2 container statuses recorded) Nov 26 06:16:30.930: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:16:30.930: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:16:30.930: INFO: hostexec-bootstrap-e2e-minion-group-8xrn-lp842 started at 2022-11-26 06:15:57 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.930: INFO: Container agnhost-container ready: true, restart count 2 Nov 26 06:16:30.930: INFO: coredns-6d97d5ddb-rr67j started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.930: INFO: Container coredns ready: false, restart count 4 Nov 26 06:16:30.930: INFO: 
external-local-update-w2stv started at 2022-11-26 06:15:49 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.930: INFO: Container netexec ready: true, restart count 0 Nov 26 06:16:30.930: INFO: external-local-nodes-cwmn7 started at 2022-11-26 06:14:22 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.930: INFO: Container netexec ready: false, restart count 2 Nov 26 06:16:30.930: INFO: ss-0 started at 2022-11-26 06:08:35 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.930: INFO: Container webserver ready: false, restart count 4 Nov 26 06:16:30.930: INFO: netserver-2 started at 2022-11-26 06:12:04 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.930: INFO: Container webserver ready: true, restart count 3 Nov 26 06:16:30.930: INFO: kube-proxy-bootstrap-e2e-minion-group-8xrn started at 2022-11-26 06:06:29 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.930: INFO: Container kube-proxy ready: false, restart count 5 Nov 26 06:16:30.930: INFO: konnectivity-agent-7ppwz started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.930: INFO: Container konnectivity-agent ready: false, restart count 4 Nov 26 06:16:30.930: INFO: netserver-2 started at 2022-11-26 06:15:56 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.930: INFO: Container webserver ready: false, restart count 2 Nov 26 06:16:30.930: INFO: l7-default-backend-8549d69d99-7w7f7 started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:16:30.930: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 06:16:31.199: INFO: Latency metrics for node bootstrap-e2e-minion-group-8xrn [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193 STEP: Destroying namespace "esipp-3192" for this suite. 11/26/22 06:16:31.199
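The per-node dumps above ("Logging pods the kubelet thinks is on node ...") enumerate container readiness and restart counts for each pod on the node. A minimal client-go sketch of the same idea, assuming a reachable API server and the kubeconfig path from this run; the real framework queries the kubelet directly, so the spec.nodeName field selector here is a simplified stand-in, not the framework's code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, taken from this run's log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List pods scheduled to one node; the framework asks the kubelet
	// directly, so this field selector is a close but simpler stand-in.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=bootstrap-e2e-minion-group-8xrn",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			fmt.Printf("Container %s ready: %v, restart count %d\n",
				st.Name, st.Ready, st.RestartCount)
		}
	}
}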
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sESIPP\s\[Slow\]\sshould\swork\sfor\stype\=NodePort$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d2a000) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func20.2() test/e2e/network/loadbalancer.go:1262 +0x113 (from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers ESIPP [Slow] set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:22:51.613 Nov 26 06:22:51.613: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename esipp 11/26/22 06:22:51.615 Nov 26 06:22:51.654: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:22:53.694: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:22:55.694: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:22:57.694: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:22:59.694: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:01.694: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:03.694: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:05.694: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:07.694: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:09.694: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:11.694: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:13.694: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:15.695: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:17.694: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:19.694: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:21.695: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:21.734: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:23:21.734: INFO: Unexpected error: <*errors.errorString | 0xc0001c9a00>: { s: "timed out waiting for the condition", } Nov 26 06:23:21.734: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d2a000) test/e2e/framework/framework.go:241 +0x96f [AfterEach] 
[sig-network] LoadBalancers ESIPP [Slow] test/e2e/framework/node/init/init.go:32 Nov 26 06:23:21.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers ESIPP [Slow] test/e2e/network/loadbalancer.go:1260 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 06:23:21.774 [DeferCleanup (Each)] [sig-network] LoadBalancers ESIPP [Slow] tear down framework | framework.go:193
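The PANICKED report in this failure (and the similar ones below) comes from an [AfterEach] that dereferences state the failed BeforeEach never set up: once namespace creation timed out, cleanup hit a nil pointer at loadbalancer.go:1262. A minimal sketch of that pattern and the defensive guard that avoids the second failure; the type and field names are illustrative, not the framework's actual fields.

package main

import "fmt"

// suiteState stands in for the per-spec framework object; clientSet is a
// placeholder for the Kubernetes client that BeforeEach normally builds.
type suiteState struct {
	clientSet *string
}

// afterEach sketches the guard that would have avoided the extra
// PANICKED report: if setup never completed, skip cleanup entirely.
func afterEach(s *suiteState) {
	if s == nil || s.clientSet == nil {
		fmt.Println("setup incomplete; skipping cleanup")
		return
	}
	fmt.Println("cleaning up with client", *s.clientSet)
}

func main() {
	// BeforeEach failed before initializing clientSet, as in the log.
	afterEach(&suiteState{})
}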
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\screate\sLoadBalancer\sService\swithout\sNodePort\sand\schange\sit\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000cfa4b0) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.2() test/e2e/network/loadbalancer.go:73 +0x113 (from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:28:02.777 Nov 26 06:28:02.777: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 06:28:02.779 Nov 26 06:28:02.818: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:04.858: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:06.859: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:08.859: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:10.859: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:12.859: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:14.858: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:16.859: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:18.859: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:20.859: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:22.858: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:24.859: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:26.859: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:28.858: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:30.859: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:32.859: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:32.898: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:32.898: INFO: Unexpected error: <*errors.errorString | 0xc000115d80>: { s: "timed out waiting for the condition", } Nov 26 06:28:32.898: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000cfa4b0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] 
[sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 06:28:32.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 06:28:32.938 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
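The namespace-creation loop above retries roughly every two seconds until a ~30s deadline, then fails with "timed out waiting for the condition" — the signature error string of apimachinery's wait package. A hedged sketch of that polling shape; the 2s/30s values are inferred from the log timestamps, not read from framework source.

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	connRefused := errors.New("dial tcp 34.83.17.181:443: connect: connection refused")

	// Poll the way the log does: try immediately, then every ~2s, give up
	// after ~30s. Returning (false, nil) keeps retrying; a non-nil error
	// would abort the poll early.
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		fmt.Println("Unexpected error while creating namespace:", connRefused)
		return false, nil
	})
	fmt.Println(err) // "timed out waiting for the condition"
}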
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\screate\san\sinternal\stype\sload\sbalancer\s\[Slow\]$'
test/e2e/framework/framework.go:241 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00111a4b0) test/e2e/framework/framework.go:241 +0x96f There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func19.2() test/e2e/network/loadbalancer.go:73 +0x113 ---------- [FAILED] Nov 26 06:31:23.053: failed to list events in namespace "loadbalancers-4161": Get "https://34.83.17.181/api/v1/namespaces/loadbalancers-4161/events": dial tcp 34.83.17.181:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Nov 26 06:31:23.094: Couldn't delete ns: "loadbalancers-4161": Delete "https://34.83.17.181/api/v1/namespaces/loadbalancers-4161": dial tcp 34.83.17.181:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.17.181/api/v1/namespaces/loadbalancers-4161", Err:(*net.OpError)(0xc003995680)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370 (from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:28:36.874 Nov 26 06:28:36.874: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 06:28:36.876 Nov 26 06:28:36.915: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:38.955: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:40.955: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:42.955: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:44.955: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:31:22.973: INFO: Unexpected error: <*fmt.wrapError | 0xc003948000>: { msg: "wait for service account \"default\" in namespace \"loadbalancers-4161\": timed out waiting for the condition", err: <*errors.errorString | 0xc000195d70>{ s: "timed out waiting for the condition", }, } Nov 26 06:31:22.974: FAIL: wait for service account "default" in namespace "loadbalancers-4161": timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc00111a4b0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 06:31:22.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 06:31:23.014 STEP: Collecting events from namespace "loadbalancers-4161". 
11/26/22 06:31:23.014 Nov 26 06:31:23.053: INFO: Unexpected error: failed to list events in namespace "loadbalancers-4161": <*url.Error | 0xc0038122d0>: { Op: "Get", URL: "https://34.83.17.181/api/v1/namespaces/loadbalancers-4161/events", Err: <*net.OpError | 0xc003984e60>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00316ac30>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 83, 17, 181], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000d58400>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Nov 26 06:31:23.053: FAIL: failed to list events in namespace "loadbalancers-4161": Get "https://34.83.17.181/api/v1/namespaces/loadbalancers-4161/events": dial tcp 34.83.17.181:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0017365c0, {0xc001037f50, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x801de88, 0xc001a22340}, {0xc001037f50, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001736650?, {0xc001037f50?, 0x7fa7740?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00111a4b0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x6627cc0?, 0xc00012c640?, 0xc001770fb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0xc001b29c28?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00012c640?, 0x29449fc?}, {0xae73300?, 0xc001770f80?, 0x2fdb5c0?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-4161" for this suite. 11/26/22 06:31:23.054 Nov 26 06:31:23.094: FAIL: Couldn't delete ns: "loadbalancers-4161": Delete "https://34.83.17.181/api/v1/namespaces/loadbalancers-4161": dial tcp 34.83.17.181:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.83.17.181/api/v1/namespaces/loadbalancers-4161", Err:(*net.OpError)(0xc003995680)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00111a4b0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x6627cc0?, 0xc00012c550?, 0xc004b1efb0?}, {0x75b6e72, 0x4}, {0xae73300, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x6627cc0?, 0xc00012c550?, 0x0?}, {0xae73300?, 0x5?, 0xc003064110?}) /usr/local/go/src/reflect/value.go:368 +0xbc
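The "Collecting events from namespace" step that fails here boils down to an Events list call against the namespace under test; with the API server endpoint refusing connections it surfaces the *url.Error shown above. A simplified client-go sketch of that call, assuming this run's kubeconfig path — not the framework's exact dump code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// With the endpoint refusing connections, this List returns the
	// dial tcp ... connection refused error captured in the log.
	events, err := cs.CoreV1().Events("loadbalancers-4161").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("failed to list events:", err)
		return
	}
	fmt.Printf("Found %d events.\n", len(events.Items))
}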
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\sswitch\ssession\saffinity\sfor\sLoadBalancer\sservice\swith\sESIPP\soff\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/network/service.go:3978 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x760dada?, {0x801de88, 0xc0002541a0}, 0xc002b03900, 0x1) test/e2e/network/service.go:3978 +0x1b1 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithTransition(...) test/e2e/network/service.go:3962 k8s.io/kubernetes/test/e2e/network.glob..func19.11() test/e2e/network/loadbalancer.go:809 +0xf3 (from junit_01.xml)
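The failing helper, execAffinityTestForLBServiceWithTransition, exercises a ClientIP-affinity LoadBalancer Service; the kubectl describe output in the log below shows the resulting object. A minimal sketch of that Service shape, with field values mirrored from the describe output — illustrative only, not the test's construction code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Values mirror `kubectl describe svc` from the log: LoadBalancer
	// type, ClientIP session affinity, port 80 -> targetPort 9376.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "affinity-lb-transition",
			Namespace: "loadbalancers-6901",
		},
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeLoadBalancer,
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Selector:        map[string]string{"name": "affinity-lb-transition"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376),
			}},
		},
	}
	fmt.Printf("%s/%s type=%s affinity=%s\n",
		svc.Namespace, svc.Name, svc.Spec.Type, svc.Spec.SessionAffinity)
}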
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:21:27.596 Nov 26 06:21:27.597: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 06:21:27.598 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 06:22:16.631 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 06:22:16.757 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [LinuxOnly] test/e2e/network/loadbalancer.go:802 STEP: creating service in namespace loadbalancers-6901 11/26/22 06:22:21.623 STEP: creating service affinity-lb-transition in namespace loadbalancers-6901 11/26/22 06:22:21.623 STEP: creating replication controller affinity-lb-transition in namespace loadbalancers-6901 11/26/22 06:22:21.736 I1126 06:22:21.797437 10094 runners.go:193] Created replication controller with name: affinity-lb-transition, namespace: loadbalancers-6901, replica count: 3 I1126 06:22:24.898437 10094 runners.go:193] affinity-lb-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 06:22:27.899437 10094 runners.go:193] affinity-lb-transition Pods: 3 out of 3 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1126 06:22:30.899754 10094 runners.go:193] affinity-lb-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 06:22:30.899772 10094 runners.go:193] Logging node info for node bootstrap-e2e-minion-group-6hf3 I1126 06:22:30.961141 10094 runners.go:193] Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6hf3 b331b1c1-4929-41cb-863a-a271765dc91a 9753 0 2022-11-26 06:06:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6hf3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6hf3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-5397":"bootstrap-e2e-minion-group-6hf3","csi-mock-csi-mock-volumes-3236":"bootstrap-e2e-minion-group-6hf3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 06:06:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager 
Update v1 2022-11-26 06:06:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 06:20:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 06:21:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 06:22:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-6hf3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 
UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:49 +0000 UTC,LastTransitionTime:2022-11-26 06:06:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:20:44 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:20:44 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:20:44 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:20:44 +0000 UTC,LastTransitionTime:2022-11-26 06:06:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.104.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:26797942e06ae330f816fe3103a312e5,SystemUUID:26797942-e06a-e330-f816-fe3103a312e5,BootID:2fb94944-0ddb-4524-8e35-935d1c8900f4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8356e743-6d51-11ed-b618-52bed00202b8 kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8af161f1-6d51-11ed-b618-52bed00202b8],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8356e743-6d51-11ed-b618-52bed00202b8,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8af161f1-6d51-11ed-b618-52bed00202b8,DevicePath:,},},Config:nil,},} I1126 06:22:30.961573 10094 runners.go:193] Logging kubelet events for node bootstrap-e2e-minion-group-6hf3 I1126 06:22:31.030658 10094 runners.go:193] Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6hf3 I1126 06:22:31.128342 10094 runners.go:193] Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6hf3: error trying to reach service: No agent available I1126 06:22:31.184235 10094 runners.go:193] Running kubectl logs on non-ready containers in loadbalancers-6901 Nov 26 06:22:31.489: INFO: Failed to get logs of pod affinity-lb-transition-rvm8c, container affinity-lb-transition, err: an error on the server ("unknown") has prevented the request from succeeding (get pods affinity-lb-transition-rvm8c) Nov 26 06:22:31.489: INFO: Logs of loadbalancers-6901/affinity-lb-transition-rvm8c:affinity-lb-transition on node bootstrap-e2e-minion-group-4lvd Nov 26 06:22:31.489: INFO: : STARTLOG ENDLOG for container loadbalancers-6901:affinity-lb-transition-rvm8c:affinity-lb-transition Nov 26 06:22:31.489: INFO: Unexpected error: failed to create replication controller with service in the namespace: loadbalancers-6901: <*errors.errorString | 0xc004972670>: { s: "1 containers failed which is more than allowed 0", } Nov 26 06:22:31.489: FAIL: failed to create replication controller with service in the namespace: loadbalancers-6901: 1 containers failed which is more than allowed 0 Full Stack 
Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x760dada?, {0x801de88, 0xc0002541a0}, 0xc002b03900, 0x1) test/e2e/network/service.go:3978 +0x1b1 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithTransition(...) test/e2e/network/service.go:3962 k8s.io/kubernetes/test/e2e/network.glob..func19.11() test/e2e/network/loadbalancer.go:809 +0xf3 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 06:22:31.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 26 06:22:31.724: INFO: Output of kubectl describe svc: Nov 26 06:22:31.724: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-6901 describe svc --namespace=loadbalancers-6901' Nov 26 06:22:32.238: INFO: stderr: "" Nov 26 06:22:32.238: INFO: stdout: "Name: affinity-lb-transition\nNamespace: loadbalancers-6901\nLabels: <none>\nAnnotations: <none>\nSelector: name=affinity-lb-transition\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.225.223\nIPs: 10.0.225.223\nPort: <unset> 80/TCP\nTargetPort: 9376/TCP\nNodePort: <unset> 30752/TCP\nEndpoints: 10.64.1.157:9376,10.64.3.164:9376\nSession Affinity: ClientIP\nExternal Traffic Policy: Cluster\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringLoadBalancer 11s service-controller Ensuring load balancer\n" Nov 26 06:22:32.238: INFO: Name: affinity-lb-transition Namespace: loadbalancers-6901 Labels: <none> Annotations: <none> Selector: name=affinity-lb-transition Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.225.223 IPs: 10.0.225.223 Port: <unset> 80/TCP TargetPort: 9376/TCP NodePort: <unset> 30752/TCP Endpoints: 10.64.1.157:9376,10.64.3.164:9376 Session Affinity: ClientIP External Traffic Policy: Cluster Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 11s service-controller Ensuring load balancer [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 06:22:32.238 STEP: Collecting events from namespace "loadbalancers-6901". 11/26/22 06:22:32.238 STEP: Found 18 events. 
11/26/22 06:22:32.325 Nov 26 06:22:32.325: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-lb-transition-p4rbk: { } Scheduled: Successfully assigned loadbalancers-6901/affinity-lb-transition-p4rbk to bootstrap-e2e-minion-group-8xrn Nov 26 06:22:32.325: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-lb-transition-q4gf5: { } Scheduled: Successfully assigned loadbalancers-6901/affinity-lb-transition-q4gf5 to bootstrap-e2e-minion-group-6hf3 Nov 26 06:22:32.325: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-lb-transition-rvm8c: { } Scheduled: Successfully assigned loadbalancers-6901/affinity-lb-transition-rvm8c to bootstrap-e2e-minion-group-4lvd Nov 26 06:22:32.325: INFO: At 2022-11-26 06:22:21 +0000 UTC - event for affinity-lb-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-transition-rvm8c Nov 26 06:22:32.325: INFO: At 2022-11-26 06:22:21 +0000 UTC - event for affinity-lb-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-transition-q4gf5 Nov 26 06:22:32.325: INFO: At 2022-11-26 06:22:21 +0000 UTC - event for affinity-lb-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-transition-p4rbk Nov 26 06:22:32.325: INFO: At 2022-11-26 06:22:21 +0000 UTC - event for affinity-lb-transition: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Nov 26 06:22:32.325: INFO: At 2022-11-26 06:22:22 +0000 UTC - event for affinity-lb-transition-q4gf5: {kubelet bootstrap-e2e-minion-group-6hf3} Created: Created container affinity-lb-transition Nov 26 06:22:32.325: INFO: At 2022-11-26 06:22:22 +0000 UTC - event for affinity-lb-transition-q4gf5: {kubelet bootstrap-e2e-minion-group-6hf3} Started: Started container affinity-lb-transition Nov 26 06:22:32.325: INFO: At 2022-11-26 06:22:22 +0000 UTC - event for affinity-lb-transition-q4gf5: {kubelet bootstrap-e2e-minion-group-6hf3} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 06:22:32.325: INFO: At 2022-11-26 06:22:23 +0000 UTC - event for affinity-lb-transition-p4rbk: {kubelet bootstrap-e2e-minion-group-8xrn} Started: Started container affinity-lb-transition Nov 26 06:22:32.325: INFO: At 2022-11-26 06:22:23 +0000 UTC - event for affinity-lb-transition-p4rbk: {kubelet bootstrap-e2e-minion-group-8xrn} Created: Created container affinity-lb-transition Nov 26 06:22:32.325: INFO: At 2022-11-26 06:22:23 +0000 UTC - event for affinity-lb-transition-p4rbk: {kubelet bootstrap-e2e-minion-group-8xrn} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 06:22:32.325: INFO: At 2022-11-26 06:22:23 +0000 UTC - event for affinity-lb-transition-q4gf5: {kubelet bootstrap-e2e-minion-group-6hf3} Killing: Stopping container affinity-lb-transition Nov 26 06:22:32.325: INFO: At 2022-11-26 06:22:24 +0000 UTC - event for affinity-lb-transition-rvm8c: {kubelet bootstrap-e2e-minion-group-4lvd} Started: Started container affinity-lb-transition Nov 26 06:22:32.325: INFO: At 2022-11-26 06:22:24 +0000 UTC - event for affinity-lb-transition-rvm8c: {kubelet bootstrap-e2e-minion-group-4lvd} Created: Created container affinity-lb-transition Nov 26 06:22:32.325: INFO: At 2022-11-26 06:22:24 +0000 UTC - event for affinity-lb-transition-rvm8c: {kubelet bootstrap-e2e-minion-group-4lvd} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 06:22:32.325: INFO: At 2022-11-26 06:22:26 +0000 UTC - event for 
affinity-lb-transition-q4gf5: {kubelet bootstrap-e2e-minion-group-6hf3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 26 06:22:32.378: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 06:22:32.378: INFO: affinity-lb-transition-p4rbk bootstrap-e2e-minion-group-8xrn Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:21 +0000 UTC }] Nov 26 06:22:32.378: INFO: affinity-lb-transition-q4gf5 bootstrap-e2e-minion-group-6hf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:21 +0000 UTC }] Nov 26 06:22:32.378: INFO: affinity-lb-transition-rvm8c bootstrap-e2e-minion-group-4lvd Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:21 +0000 UTC ContainersNotReady containers with unready status: [affinity-lb-transition]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:21 +0000 UTC ContainersNotReady containers with unready status: [affinity-lb-transition]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:22:21 +0000 UTC }] Nov 26 06:22:32.378: INFO: Nov 26 06:22:32.454: INFO: Unable to fetch loadbalancers-6901/affinity-lb-transition-p4rbk/affinity-lb-transition logs: an error on the server ("unknown") has prevented the request from succeeding (get pods affinity-lb-transition-p4rbk) Nov 26 06:22:32.558: INFO: Unable to fetch loadbalancers-6901/affinity-lb-transition-q4gf5/affinity-lb-transition logs: an error on the server ("unknown") has prevented the request from succeeding (get pods affinity-lb-transition-q4gf5) Nov 26 06:22:32.639: INFO: Unable to fetch loadbalancers-6901/affinity-lb-transition-rvm8c/affinity-lb-transition logs: an error on the server ("unknown") has prevented the request from succeeding (get pods affinity-lb-transition-rvm8c) Nov 26 06:22:32.700: INFO: Logging node info for node bootstrap-e2e-master Nov 26 06:22:32.759: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 193495c1-5b1f-409e-8da9-b1de094ba8ed 9690 0 2022-11-26 06:06:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 06:06:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 06:22:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:49 +0000 UTC,LastTransitionTime:2022-11-26 06:06:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:22:11 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:22:11 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:22:11 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:22:11 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.17.181,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d1b5362410d55fa2b320229b7ed22cb3,SystemUUID:d1b53624-10d5-5fa2-b320-229b7ed22cb3,BootID:13424371-b4dc-4f78-9364-1afc151da040,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:22:32.760: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 06:22:32.822: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 06:22:32.886: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 26 06:22:32.886: INFO: Logging node info for node bootstrap-e2e-minion-group-4lvd Nov 26 06:22:32.944: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-4lvd e0d66193-5762-49e7-b872-8ea4dee27a72 10174 0 2022-11-26 06:06:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 
beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-4lvd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-4lvd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volumemode-7846":"bootstrap-e2e-minion-group-4lvd"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 06:11:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2022-11-26 06:21:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 06:22:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-4lvd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:22:31 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:22:31 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:22:31 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:22:31 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 
+0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.235.77,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:60066c0df6854403f12a43e8820a94a9,SystemUUID:60066c0d-f685-4403-f12a-43e8820a94a9,BootID:c95b6e64-9275-42a1-b638-b99f8479d167,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:22:32.944: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-4lvd Nov 26 06:22:33.016: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-4lvd Nov 26 06:22:33.106: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-4lvd: error trying to reach service: No agent available Nov 26 06:22:33.106: INFO: Logging node info for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:22:33.163: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6hf3 b331b1c1-4929-41cb-863a-a271765dc91a 9753 0 2022-11-26 06:06:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6hf3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6hf3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-5397":"bootstrap-e2e-minion-group-6hf3","csi-mock-csi-mock-volumes-3236":"bootstrap-e2e-minion-group-6hf3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 06:06:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 06:20:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 06:21:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 06:22:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-6hf3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 
UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:49 +0000 UTC,LastTransitionTime:2022-11-26 06:06:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:20:44 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:20:44 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:20:44 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:20:44 +0000 UTC,LastTransitionTime:2022-11-26 06:06:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.104.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:26797942e06ae330f816fe3103a312e5,SystemUUID:26797942-e06a-e330-f816-fe3103a312e5,BootID:2fb94944-0ddb-4524-8e35-935d1c8900f4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8356e743-6d51-11ed-b618-52bed00202b8 kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8af161f1-6d51-11ed-b618-52bed00202b8],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8356e743-6d51-11ed-b618-52bed00202b8,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8af161f1-6d51-11ed-b618-52bed00202b8,DevicePath:,},},Config:nil,},} Nov 26 06:22:33.163: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:22:33.275: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6hf3 Nov 26 06:22:33.368: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6hf3: error trying to reach service: No agent available Nov 26 06:22:33.368: INFO: Logging node info for node bootstrap-e2e-minion-group-8xrn Nov 26 06:22:33.434: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-8xrn bdb3ca25-6ac6-4bb7-a82f-bb358d7ba8f9 10200 0 2022-11-26 06:06:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-8xrn kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-8xrn topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-26 06:21:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 06:22:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 06:22:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-8xrn,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:22:31 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:22:31 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:22:31 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:22:31 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.105.92.105,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-8xrn.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-8xrn.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1e49116bed901a0b492cc2a872edd7a8,SystemUUID:1e49116b-ed90-1a0b-492c-c2a872edd7a8,BootID:24bc719e-5b36-4d61-bc89-44455ca28dd7,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-5148^b3aa56ca-6d52-11ed-8032-4289a146cfc5],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-5148^b3aa56ca-6d52-11ed-8032-4289a146cfc5,DevicePath:,},},Config:nil,},} Nov 26 06:22:33.434: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-8xrn Nov 26 06:22:33.609: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-8xrn Nov 26 06:22:33.771: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-8xrn: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-6901" for this suite. 11/26/22 06:22:33.771
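The `kubectl describe svc` output earlier in this log pins down the Service under test: type LoadBalancer, ClientIP session affinity, port 80 forwarding to target port 9376, selector `name=affinity-lb-transition`. For reference, here is a minimal client-go sketch that builds an equivalent Service from those logged fields; it is not the e2e framework's own helper, and the kubeconfig resolution and error handling are illustrative assumptions.

```go
// Sketch only: constructs a Service matching the fields reported by
// `kubectl describe svc` above. Namespace, name, ports, and selector are
// copied from the log; everything else is an assumption for illustration.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster via the default kubeconfig location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-lb-transition"},
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeLoadBalancer,
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Selector:        map[string]string{"name": "affinity-lb-transition"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376),
			}},
		},
	}
	created, err := client.CoreV1().Services("loadbalancers-6901").Create(
		context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created service:", created.Name)
}
```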
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sbe\sable\sto\sswitch\ssession\saffinity\sfor\sLoadBalancer\sservice\swith\sESIPP\son\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/network/service.go:3978
k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x7638a85?, {0x801de88, 0xc0023cc820}, 0xc002830f00, 0x1)
	test/e2e/network/service.go:3978 +0x1b1
k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithTransition(...)
	test/e2e/network/service.go:3962
k8s.io/kubernetes/test/e2e/network.glob..func19.9()
	test/e2e/network/loadbalancer.go:787 +0xf3
from junit_01.xml
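For context, this spec exercises a LoadBalancer Service configured with ClientIP session affinity and externalTrafficPolicy: Local (the "ESIPP on" in the spec name), then switches the affinity setting to verify the traffic behavior changes. Below is a minimal client-go sketch of that Service shape — not the e2e suite's own helper; the kubeconfig path, namespace, ports, and names are illustrative values copied from the log that follows.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path, matching the one the suite logs below.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-lb-esipp-transition"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "affinity-lb-esipp-transition"},
			Type:     corev1.ServiceTypeLoadBalancer,
			// ClientIP affinity pins each client IP to a single backend pod.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			// "ESIPP on": only node-local endpoints receive LB traffic, and
			// the original client source IP is preserved.
			ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
			// Port/targetPort as reported by "kubectl describe svc" below.
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376),
			}},
		},
	}

	created, err := cs.CoreV1().Services("loadbalancers-2710").Create(
		context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created %s/%s\n", created.Namespace, created.Name)
}

In this run the test never reached the affinity transition: the replication controller backing the Service reported "1 containers failed which is more than allowed 0" during setup, as the log below shows.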
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:25:41.878 Nov 26 06:25:41.879: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 06:25:41.88 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 06:25:42.115 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 06:25:42.221 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [LinuxOnly] test/e2e/network/loadbalancer.go:780 STEP: creating service in namespace loadbalancers-2710 11/26/22 06:25:42.383 STEP: creating service affinity-lb-esipp-transition in namespace loadbalancers-2710 11/26/22 06:25:42.383 STEP: creating replication controller affinity-lb-esipp-transition in namespace loadbalancers-2710 11/26/22 06:25:42.509 I1126 06:25:42.594398 10273 runners.go:193] Created replication controller with name: affinity-lb-esipp-transition, namespace: loadbalancers-2710, replica count: 3 I1126 06:25:45.695608 10273 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 06:25:48.695874 10273 runners.go:193] affinity-lb-esipp-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 06:25:48.695891 10273 runners.go:193] Logging node info for node bootstrap-e2e-minion-group-4lvd I1126 06:25:48.791808 10273 runners.go:193] Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-4lvd e0d66193-5762-49e7-b872-8ea4dee27a72 11679 0 2022-11-26 06:06:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-4lvd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-4lvd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-26 06:21:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 06:25:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 06:25:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-4lvd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning 
properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:16 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:16 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:16 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:25:16 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.235.77,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:60066c0df6854403f12a43e8820a94a9,SystemUUID:60066c0d-f685-4403-f12a-43e8820a94a9,BootID:c95b6e64-9275-42a1-b638-b99f8479d167,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-6328^139fe791-6d53-11ed-8b0a-ae3fa0c45b22],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-6328^139fe791-6d53-11ed-8b0a-ae3fa0c45b22,DevicePath:,},},Config:nil,},} I1126 06:25:48.792254 10273 runners.go:193] Logging kubelet events for node bootstrap-e2e-minion-group-4lvd I1126 06:25:48.861566 10273 runners.go:193] Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-4lvd I1126 06:25:49.008997 10273 runners.go:193] Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-4lvd: error trying to reach service: No agent available I1126 06:25:49.079712 10273 runners.go:193] Running kubectl logs on non-ready containers in loadbalancers-2710 Nov 26 06:25:49.424: INFO: Failed to get logs of pod affinity-lb-esipp-transition-cm5jp, container affinity-lb-esipp-transition, err: an error on the server ("unknown") has prevented the request from succeeding (get pods affinity-lb-esipp-transition-cm5jp) Nov 26 06:25:49.424: INFO: Logs of loadbalancers-2710/affinity-lb-esipp-transition-cm5jp:affinity-lb-esipp-transition on node bootstrap-e2e-minion-group-6hf3 Nov 26 06:25:49.424: INFO: : STARTLOG ENDLOG for container loadbalancers-2710:affinity-lb-esipp-transition-cm5jp:affinity-lb-esipp-transition Nov 26 06:25:49.424: INFO: Unexpected error: failed to create replication controller with service in the namespace: loadbalancers-2710: <*errors.errorString | 0xc0008011b0>: { s: "1 containers failed which is more than allowed 0", } Nov 26 06:25:49.424: FAIL: failed to create replication controller with service in the namespace: loadbalancers-2710: 1 containers failed which is more than allowed 0 Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x7638a85?, {0x801de88, 0xc0023cc820}, 0xc002830f00, 0x1) test/e2e/network/service.go:3978 +0x1b1 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithTransition(...) 
test/e2e/network/service.go:3962 k8s.io/kubernetes/test/e2e/network.glob..func19.9() test/e2e/network/loadbalancer.go:787 +0xf3 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 06:25:49.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 26 06:25:49.603: INFO: Output of kubectl describe svc: Nov 26 06:25:49.603: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-2710 describe svc --namespace=loadbalancers-2710' Nov 26 06:25:50.240: INFO: stderr: "" Nov 26 06:25:50.240: INFO: stdout: "Name: affinity-lb-esipp-transition\nNamespace: loadbalancers-2710\nLabels: <none>\nAnnotations: <none>\nSelector: name=affinity-lb-esipp-transition\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.197.139\nIPs: 10.0.197.139\nPort: <unset> 80/TCP\nTargetPort: 9376/TCP\nNodePort: <unset> 32312/TCP\nEndpoints: 10.64.0.152:9376,10.64.1.194:9376,10.64.3.204:9376\nSession Affinity: ClientIP\nExternal Traffic Policy: Local\nHealthCheck NodePort: 31180\nEvents: <none>\n" Nov 26 06:25:50.240: INFO: Name: affinity-lb-esipp-transition Namespace: loadbalancers-2710 Labels: <none> Annotations: <none> Selector: name=affinity-lb-esipp-transition Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.197.139 IPs: 10.0.197.139 Port: <unset> 80/TCP TargetPort: 9376/TCP NodePort: <unset> 32312/TCP Endpoints: 10.64.0.152:9376,10.64.1.194:9376,10.64.3.204:9376 Session Affinity: ClientIP External Traffic Policy: Local HealthCheck NodePort: 31180 Events: <none> [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 06:25:50.24 STEP: Collecting events from namespace "loadbalancers-2710". 11/26/22 06:25:50.24 STEP: Found 18 events. 
11/26/22 06:25:50.312 Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:42 +0000 UTC - event for affinity-lb-esipp-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-esipp-transition-lqr9t Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:42 +0000 UTC - event for affinity-lb-esipp-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-esipp-transition-bngcw Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:42 +0000 UTC - event for affinity-lb-esipp-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-esipp-transition-cm5jp Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:42 +0000 UTC - event for affinity-lb-esipp-transition-bngcw: {default-scheduler } Scheduled: Successfully assigned loadbalancers-2710/affinity-lb-esipp-transition-bngcw to bootstrap-e2e-minion-group-4lvd Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:42 +0000 UTC - event for affinity-lb-esipp-transition-cm5jp: {default-scheduler } Scheduled: Successfully assigned loadbalancers-2710/affinity-lb-esipp-transition-cm5jp to bootstrap-e2e-minion-group-6hf3 Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:42 +0000 UTC - event for affinity-lb-esipp-transition-lqr9t: {default-scheduler } Scheduled: Successfully assigned loadbalancers-2710/affinity-lb-esipp-transition-lqr9t to bootstrap-e2e-minion-group-8xrn Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:43 +0000 UTC - event for affinity-lb-esipp-transition-bngcw: {kubelet bootstrap-e2e-minion-group-4lvd} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:43 +0000 UTC - event for affinity-lb-esipp-transition-bngcw: {kubelet bootstrap-e2e-minion-group-4lvd} Created: Created container affinity-lb-esipp-transition Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:43 +0000 UTC - event for affinity-lb-esipp-transition-bngcw: {kubelet bootstrap-e2e-minion-group-4lvd} Started: Started container affinity-lb-esipp-transition Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:44 +0000 UTC - event for affinity-lb-esipp-transition-bngcw: {kubelet bootstrap-e2e-minion-group-4lvd} Killing: Stopping container affinity-lb-esipp-transition Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:44 +0000 UTC - event for affinity-lb-esipp-transition-cm5jp: {kubelet bootstrap-e2e-minion-group-6hf3} Created: Created container affinity-lb-esipp-transition Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:44 +0000 UTC - event for affinity-lb-esipp-transition-cm5jp: {kubelet bootstrap-e2e-minion-group-6hf3} Started: Started container affinity-lb-esipp-transition Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:44 +0000 UTC - event for affinity-lb-esipp-transition-cm5jp: {kubelet bootstrap-e2e-minion-group-6hf3} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:44 +0000 UTC - event for affinity-lb-esipp-transition-lqr9t: {kubelet bootstrap-e2e-minion-group-8xrn} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-vp5p9" : failed to sync configmap cache: timed out waiting for the condition Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:45 +0000 UTC - event for affinity-lb-esipp-transition-lqr9t: {kubelet bootstrap-e2e-minion-group-8xrn} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:45 +0000 UTC - event for affinity-lb-esipp-transition-lqr9t: {kubelet 
bootstrap-e2e-minion-group-8xrn} Created: Created container affinity-lb-esipp-transition Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:45 +0000 UTC - event for affinity-lb-esipp-transition-lqr9t: {kubelet bootstrap-e2e-minion-group-8xrn} Started: Started container affinity-lb-esipp-transition Nov 26 06:25:50.312: INFO: At 2022-11-26 06:25:47 +0000 UTC - event for affinity-lb-esipp-transition-bngcw: {kubelet bootstrap-e2e-minion-group-4lvd} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 26 06:25:50.392: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 06:25:50.392: INFO: affinity-lb-esipp-transition-bngcw bootstrap-e2e-minion-group-4lvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:42 +0000 UTC }] Nov 26 06:25:50.392: INFO: affinity-lb-esipp-transition-cm5jp bootstrap-e2e-minion-group-6hf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:42 +0000 UTC }] Nov 26 06:25:50.392: INFO: affinity-lb-esipp-transition-lqr9t bootstrap-e2e-minion-group-8xrn Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:25:42 +0000 UTC }] Nov 26 06:25:50.392: INFO: Nov 26 06:25:50.525: INFO: Unable to fetch loadbalancers-2710/affinity-lb-esipp-transition-bngcw/affinity-lb-esipp-transition logs: an error on the server ("unknown") has prevented the request from succeeding (get pods affinity-lb-esipp-transition-bngcw) Nov 26 06:25:50.639: INFO: Unable to fetch loadbalancers-2710/affinity-lb-esipp-transition-cm5jp/affinity-lb-esipp-transition logs: an error on the server ("unknown") has prevented the request from succeeding (get pods affinity-lb-esipp-transition-cm5jp) Nov 26 06:25:50.808: INFO: Unable to fetch loadbalancers-2710/affinity-lb-esipp-transition-lqr9t/affinity-lb-esipp-transition logs: an error on the server ("unknown") has prevented the request from succeeding (get pods affinity-lb-esipp-transition-lqr9t) Nov 26 06:25:50.917: INFO: Logging node info for node bootstrap-e2e-master Nov 26 06:25:50.983: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 193495c1-5b1f-409e-8da9-b1de094ba8ed 9690 0 2022-11-26 06:06:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 06:06:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 06:22:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:49 +0000 UTC,LastTransitionTime:2022-11-26 06:06:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:22:11 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:22:11 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:22:11 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:22:11 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.17.181,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d1b5362410d55fa2b320229b7ed22cb3,SystemUUID:d1b53624-10d5-5fa2-b320-229b7ed22cb3,BootID:13424371-b4dc-4f78-9364-1afc151da040,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:25:50.983: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 06:25:51.205: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 06:25:51.361: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Nov 26 06:25:51.361: INFO: Logging node info for node bootstrap-e2e-minion-group-4lvd Nov 26 06:25:51.420: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-4lvd e0d66193-5762-49e7-b872-8ea4dee27a72 11679 0 2022-11-26 06:06:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 
beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-4lvd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-4lvd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-26 06:21:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 06:25:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 06:25:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-4lvd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:21:36 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:16 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:16 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:16 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:25:16 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 
+0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.235.77,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:60066c0df6854403f12a43e8820a94a9,SystemUUID:60066c0d-f685-4403-f12a-43e8820a94a9,BootID:c95b6e64-9275-42a1-b638-b99f8479d167,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-multivolume-6328^139fe791-6d53-11ed-8b0a-ae3fa0c45b22],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-multivolume-6328^139fe791-6d53-11ed-8b0a-ae3fa0c45b22,DevicePath:,},},Config:nil,},} Nov 26 06:25:51.421: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-4lvd Nov 26 06:25:51.478: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-4lvd Nov 26 06:25:51.561: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-4lvd: error trying to reach service: No agent available Nov 26 06:25:51.561: INFO: Logging node info for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:25:51.626: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6hf3 b331b1c1-4929-41cb-863a-a271765dc91a 11900 0 2022-11-26 06:06:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6hf3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6hf3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-5397":"bootstrap-e2e-minion-group-6hf3","csi-mock-csi-mock-volumes-3236":"bootstrap-e2e-minion-group-6hf3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 06:06:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 06:20:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {node-problem-detector Update v1 2022-11-26 06:21:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-11-26 06:25:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-6hf3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:41 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:49 +0000 UTC,LastTransitionTime:2022-11-26 06:06:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:50 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:50 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:50 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:25:50 +0000 UTC,LastTransitionTime:2022-11-26 06:06:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.104.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:26797942e06ae330f816fe3103a312e5,SystemUUID:26797942-e06a-e330-f816-fe3103a312e5,BootID:2fb94944-0ddb-4524-8e35-935d1c8900f4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e registry.k8s.io/e2e-test-images/volume/nfs:1.3],SizeBytes:95836203,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8356e743-6d51-11ed-b618-52bed00202b8 kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8af161f1-6d51-11ed-b618-52bed00202b8],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8356e743-6d51-11ed-b618-52bed00202b8,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3236^8af161f1-6d51-11ed-b618-52bed00202b8,DevicePath:,},},Config:nil,},} Nov 26 06:25:51.627: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:25:51.696: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6hf3 Nov 26 06:25:51.812: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6hf3: error trying to reach service: No agent available Nov 26 06:25:51.812: INFO: Logging node info for node bootstrap-e2e-minion-group-8xrn Nov 26 06:25:51.881: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-8xrn bdb3ca25-6ac6-4bb7-a82f-bb358d7ba8f9 11659 0 2022-11-26 06:06:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-8xrn kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-8xrn topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-5148":"bootstrap-e2e-minion-group-8xrn","csi-mock-csi-mock-volumes-4529":"csi-mock-csi-mock-volumes-4529"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] 
[{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-26 06:21:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 06:22:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 06:25:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-8xrn,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:21:35 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:37 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:37 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:25:37 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:25:37 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.105.92.105,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-8xrn.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-8xrn.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1e49116bed901a0b492cc2a872edd7a8,SystemUUID:1e49116b-ed90-1a0b-492c-c2a872edd7a8,BootID:24bc719e-5b36-4d61-bc89-44455ca28dd7,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:25:51.881: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-8xrn Nov 26 06:25:51.946: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-8xrn Nov 26 06:25:52.043: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-8xrn: error trying to reach service: No agent available [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-2710" for this suite. 11/26/22 06:25:52.043
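Note on the repeated "Unable to retrieve kubelet pods for node ...: error trying to reach service: No agent available" lines above: the framework reads a node's pods through the API server's node proxy subresource, and on this cluster that hop is served via the konnectivity tunnel, so once the apiserver has no agents the call fails even though the kubelet itself may be healthy. A minimal client-go sketch of that request shape (an illustration, not the framework's exact helper; the kubeconfig path and node name are taken from the log):

// Sketch: ask the API server to proxy a request to a node's kubelet.
// This is the style of call that fails above with "No agent available"
// when the control plane's konnectivity tunnel to the nodes is down.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func kubeletPods(ctx context.Context, cs kubernetes.Interface, node string) ([]byte, error) {
	// GET /api/v1/nodes/<node>/proxy/pods — answered by the kubelet,
	// proxied through the API server (and its konnectivity agents).
	return cs.CoreV1().RESTClient().
		Get().
		Resource("nodes").
		Name(node).
		SubResource("proxy").
		Suffix("pods").
		DoRaw(ctx)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	raw, err := kubeletPods(context.Background(), cs, "bootstrap-e2e-minion-group-8xrn")
	if err != nil {
		// With the tunnel down, the proxy returns an error like the one logged above.
		fmt.Println("kubelet proxy failed:", err)
		return
	}
	fmt.Printf("%d bytes of PodList JSON\n", len(raw))
}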
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\shandle\sload\sbalancer\scleanup\sfinalizer\sfor\sservice\s\[Slow\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000cfa4b0)
    test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func19.2()
    test/e2e/network/loadbalancer.go:73 +0x113
(from junit_01.xml)
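The summary above shows a two-stage failure: BeforeEach at framework.go:241 gives up waiting for namespace creation (the apiserver at 34.83.17.181 is refusing connections, per the log below), and the AfterEach at loadbalancer.go:73 then panics with a nil pointer dereference because it runs against state the failed setup never initialized. A hypothetical Go reduction of that pattern, with illustrative names (clusterState is not the test's real type), alongside the guard that would avoid the secondary panic:

// Hypothetical reconstruction of the failure pattern above, not the actual
// loadbalancer.go code: setup fails before it populates shared state, and a
// teardown that dereferences that state panics.
package main

import "fmt"

type clusterState struct{ nodes []string }

var cs *clusterState // BeforeEach never completed, so this stays nil

func afterEachUnsafe() {
	// Mirrors the observed panic: invalid memory address or nil pointer dereference.
	fmt.Println(len(cs.nodes))
}

func afterEachGuarded() {
	// A defensive teardown skips cleanup that depends on setup that never happened.
	if cs == nil {
		fmt.Println("setup never completed; skipping cleanup")
		return
	}
	fmt.Println(len(cs.nodes))
}

func main() {
	afterEachGuarded()
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	afterEachUnsafe()
}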
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:26:51.053 Nov 26 06:26:51.053: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 06:26:51.055 Nov 26 06:26:51.094: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:26:53.134: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:26:55.134: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:26:57.134: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:26:59.135: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:01.134: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:03.134: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:05.134: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:07.134: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:09.134: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:11.134: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:13.134: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:15.134: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:17.135: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:19.134: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:21.134: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:21.173: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:21.173: INFO: Unexpected error: <*errors.errorString | 0xc00020fd40>: { s: "timed out waiting for the condition", } Nov 26 06:27:21.173: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000cfa4b0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] 
[sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 06:27:21.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 06:27:21.214 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
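For reference, the BeforeEach log above is the visible behavior of a poll-until-timeout loop: the namespace Create call is retried roughly every two seconds, each connection-refused error is logged but swallowed, and once the deadline passes the generic "timed out waiting for the condition" error surfaces. A minimal sketch of such a loop, assuming client-go and apimachinery's wait helpers (the function name and intervals are illustrative, not the framework's actual code):

// Sketch of a retry-until-timeout namespace creation, as suggested by the
// repeated "Unexpected error while creating namespace" lines above.
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func createTestNamespace(cs kubernetes.Interface, baseName string) (*corev1.Namespace, error) {
	var got *corev1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns, err := cs.CoreV1().Namespaces().Create(context.TODO(), &corev1.Namespace{
			ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"},
		}, metav1.CreateOptions{})
		if err != nil {
			// e.g. dial tcp ...:443: connect: connection refused —
			// log it and keep polling instead of failing immediately.
			fmt.Println("Unexpected error while creating namespace:", err)
			return false, nil
		}
		got = ns
		return true, nil
	})
	// On an unreachable apiserver, err is the wait package's timeout error,
	// whose message is exactly "timed out waiting for the condition".
	return got, err
}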
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\shave\ssession\saffinity\swork\sfor\sLoadBalancer\sservice\swith\sESIPP\soff\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/framework/framework.go:241
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011a24b0)
    test/e2e/framework/framework.go:241 +0x96f
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func19.2()
    test/e2e/network/loadbalancer.go:73 +0x113
(from junit_01.xml)
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:27:51.575 Nov 26 06:27:51.575: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 06:27:51.577 Nov 26 06:27:51.617: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:53.656: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:55.657: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:57.656: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:27:59.656: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:01.657: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:03.656: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:05.656: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:07.657: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:09.657: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:11.656: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:13.656: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:15.657: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:17.657: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:19.656: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:21.656: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:21.696: INFO: Unexpected error while creating namespace: Post "https://34.83.17.181/api/v1/namespaces": dial tcp 34.83.17.181:443: connect: connection refused Nov 26 06:28:21.696: INFO: Unexpected error: <*errors.errorString | 0xc0001fda10>: { s: "timed out waiting for the condition", } Nov 26 06:28:21.696: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0011a24b0) test/e2e/framework/framework.go:241 +0x96f [AfterEach] 
[sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 06:28:21.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 06:28:21.736 [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\shave\ssession\saffinity\swork\sfor\sLoadBalancer\sservice\swith\sESIPP\son\s\[Slow\]\s\[LinuxOnly\]$'
test/e2e/network/service.go:3978
k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x75eccfc?, {0x801de88, 0xc001a2cb60}, 0xc000939400, 0x0)
    test/e2e/network/service.go:3978 +0x1b1
k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBService(...)
    test/e2e/network/service.go:3966
k8s.io/kubernetes/test/e2e/network.glob..func19.8()
    test/e2e/network/loadbalancer.go:776 +0xf0
(from junit_01.xml)
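This failure is in the setup phase of the affinity test rather than in the affinity check itself: execAffinityTestForLBService first creates a replication controller plus a LoadBalancer Service with ClientIP session affinity and, with ESIPP on, externalTrafficPolicy: Local (both visible in the kubectl describe output further down). A sketch of that Service shape in Go, with field values taken from the log (the construction is illustrative, not copied from the test):

// Sketch of the Service under test: LoadBalancer type, ClientIP session
// affinity, and Local external traffic policy (ESIPP on), matching the
// describe output below (port 80 -> targetPort 9376).
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func affinityLBService(ns string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-lb-esipp", Namespace: ns},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"name": "affinity-lb-esipp"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376),
			}},
			// ESIPP on: preserve the client source IP by routing only to
			// node-local endpoints (hence the HealthCheck NodePort below).
			ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
			// The property under test: a given client sticks to one backend.
			SessionAffinity: corev1.ServiceAffinityClientIP,
		},
	}
}

Here the test never reaches the affinity assertions because one of the RC's containers failed during startup ("1 containers failed which is more than allowed 0" in the log below), which the runner treats as a setup error.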
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:09:25.09 Nov 26 06:09:25.090: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 06:09:25.092 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 06:09:25.324 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 06:09:25.438 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should have session affinity work for LoadBalancer service with ESIPP on [Slow] [LinuxOnly] test/e2e/network/loadbalancer.go:769 STEP: creating service in namespace loadbalancers-5104 11/26/22 06:09:25.628 STEP: creating service affinity-lb-esipp in namespace loadbalancers-5104 11/26/22 06:09:25.628 STEP: creating replication controller affinity-lb-esipp in namespace loadbalancers-5104 11/26/22 06:09:25.828 I1126 06:09:25.894762 10139 runners.go:193] Created replication controller with name: affinity-lb-esipp, namespace: loadbalancers-5104, replica count: 3 I1126 06:09:28.995124 10139 runners.go:193] affinity-lb-esipp Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 06:09:31.996283 10139 runners.go:193] affinity-lb-esipp Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 06:09:34.996583 10139 runners.go:193] affinity-lb-esipp Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1126 06:09:34.996600 10139 runners.go:193] Logging node info for node bootstrap-e2e-minion-group-4lvd I1126 06:09:35.080621 10139 runners.go:193] Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-4lvd e0d66193-5762-49e7-b872-8ea4dee27a72 2590 0 2022-11-26 06:06:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-4lvd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-4lvd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } 
{node-problem-detector Update v1 2022-11-26 06:06:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 06:09:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 06:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-4lvd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 
UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:09:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:09:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:09:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:09:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.235.77,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:60066c0df6854403f12a43e8820a94a9,SystemUUID:60066c0d-f685-4403-f12a-43e8820a94a9,BootID:c95b6e64-9275-42a1-b638-b99f8479d167,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-4266^d6fecfae-6d50-11ed-bee1-72f9a192f147],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4266^d6fecfae-6d50-11ed-bee1-72f9a192f147,DevicePath:,},},Config:nil,},} I1126 06:09:35.081068 10139 runners.go:193] Logging kubelet events for node bootstrap-e2e-minion-group-4lvd I1126 06:09:35.134885 10139 runners.go:193] Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-4lvd I1126 06:09:35.278726 10139 runners.go:193] metadata-proxy-v0.1-z77w4 started at 2022-11-26 06:06:30 +0000 UTC (0+2 container statuses recorded) I1126 06:09:35.278753 10139 runners.go:193] Container metadata-proxy ready: true, restart count 0 I1126 06:09:35.278759 10139 runners.go:193] Container prometheus-to-sd-exporter ready: true, restart count 0 I1126 06:09:35.278763 10139 runners.go:193] konnectivity-agent-dx4vl started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) I1126 06:09:35.278769 10139 runners.go:193] Container konnectivity-agent ready: false, restart count 1 I1126 06:09:35.278772 10139 runners.go:193] pod-configmaps-31d506dc-db6d-4b37-bb2a-2737bf1878c2 started at 2022-11-26 06:09:33 +0000 UTC (0+1 container statuses recorded) I1126 06:09:35.278776 10139 runners.go:193] Container agnhost-container ready: false, restart count 0 I1126 06:09:35.278780 10139 runners.go:193] csi-hostpathplugin-0 started at 2022-11-26 06:08:55 +0000 UTC (0+7 container statuses recorded) I1126 06:09:35.278786 10139 runners.go:193] Container csi-attacher ready: false, restart count 1 I1126 06:09:35.278789 10139 runners.go:193] Container csi-provisioner ready: false, restart count 1 I1126 06:09:35.278794 10139 runners.go:193] Container csi-resizer ready: false, restart count 1 I1126 06:09:35.278797 10139 runners.go:193] Container csi-snapshotter ready: false, restart count 1 I1126 06:09:35.278801 10139 runners.go:193] Container hostpath ready: false, restart count 1 I1126 06:09:35.278806 10139 runners.go:193] Container liveness-probe ready: false, restart count 1 I1126 06:09:35.278809 10139 runners.go:193] Container node-driver-registrar ready: false, restart count 1 I1126 06:09:35.278813 10139 runners.go:193] pod-2c115652-4636-4797-8221-d1eea046cf6a started at 2022-11-26 06:08:36 +0000 UTC (0+1 container statuses recorded) I1126 06:09:35.278818 10139 runners.go:193] Container write-pod ready: false, restart count 0 I1126 06:09:35.278822 10139 runners.go:193] mutability-test-mzj28 started at 2022-11-26 06:08:48 +0000 UTC (0+1 container statuses recorded) I1126 06:09:35.278826 10139 runners.go:193] Container netexec ready: true, restart count 1 I1126 06:09:35.278830 10139 runners.go:193] mutability-test-mxm86 started at 2022-11-26 06:08:48 +0000 UTC (0+1 container statuses recorded) I1126 06:09:35.278835 10139 runners.go:193] Container netexec ready: true, restart count 1 I1126 
06:09:35.278840 10139 runners.go:193] kube-proxy-bootstrap-e2e-minion-group-4lvd started at 2022-11-26 06:06:29 +0000 UTC (0+1 container statuses recorded) I1126 06:09:35.278845 10139 runners.go:193] Container kube-proxy ready: false, restart count 2 I1126 06:09:35.278849 10139 runners.go:193] coredns-6d97d5ddb-n6d4l started at 2022-11-26 06:06:46 +0000 UTC (0+1 container statuses recorded) I1126 06:09:35.278853 10139 runners.go:193] Container coredns ready: true, restart count 1 I1126 06:09:35.278857 10139 runners.go:193] pod-subpath-test-dynamicpv-m52h started at 2022-11-26 06:09:09 +0000 UTC (1+1 container statuses recorded) I1126 06:09:35.278861 10139 runners.go:193] Init container init-volume-dynamicpv-m52h ready: false, restart count 0 I1126 06:09:35.278865 10139 runners.go:193] Container test-container-subpath-dynamicpv-m52h ready: false, restart count 0 I1126 06:09:35.278867 10139 runners.go:193] affinity-lb-esipp-qnhll started at 2022-11-26 06:09:26 +0000 UTC (0+1 container statuses recorded) I1126 06:09:35.278871 10139 runners.go:193] Container affinity-lb-esipp ready: true, restart count 1 I1126 06:09:35.643443 10139 runners.go:193] Latency metrics for node bootstrap-e2e-minion-group-4lvd I1126 06:09:35.723412 10139 runners.go:193] Running kubectl logs on non-ready containers in loadbalancers-5104 Nov 26 06:09:35.723: INFO: Unexpected error: failed to create replication controller with service in the namespace: loadbalancers-5104: <*errors.errorString | 0xc0013f3170>: { s: "1 containers failed which is more than allowed 0", } Nov 26 06:09:35.723: FAIL: failed to create replication controller with service in the namespace: loadbalancers-5104: 1 containers failed which is more than allowed 0 Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBServiceWithOptionalTransition(0x75eccfc?, {0x801de88, 0xc001a2cb60}, 0xc000939400, 0x0) test/e2e/network/service.go:3978 +0x1b1 k8s.io/kubernetes/test/e2e/network.execAffinityTestForLBService(...) 
test/e2e/network/service.go:3966 k8s.io/kubernetes/test/e2e/network.glob..func19.8() test/e2e/network/loadbalancer.go:776 +0xf0 [AfterEach] [sig-network] LoadBalancers test/e2e/framework/node/init/init.go:32 Nov 26 06:09:35.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:71 Nov 26 06:09:35.806: INFO: Output of kubectl describe svc: Nov 26 06:09:35.806: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-5104 describe svc --namespace=loadbalancers-5104' Nov 26 06:09:36.645: INFO: stderr: "" Nov 26 06:09:36.645: INFO: stdout: "Name: affinity-lb-esipp\nNamespace: loadbalancers-5104\nLabels: <none>\nAnnotations: <none>\nSelector: name=affinity-lb-esipp\nType: LoadBalancer\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.0.166.109\nIPs: 10.0.166.109\nPort: <unset> 80/TCP\nTargetPort: 9376/TCP\nNodePort: <unset> 32418/TCP\nEndpoints: 10.64.0.25:9376,10.64.3.26:9376,10.64.3.27:9376\nSession Affinity: ClientIP\nExternal Traffic Policy: Local\nHealthCheck NodePort: 30244\nEvents: <none>\n" Nov 26 06:09:36.645: INFO: Name: affinity-lb-esipp Namespace: loadbalancers-5104 Labels: <none> Annotations: <none> Selector: name=affinity-lb-esipp Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.166.109 IPs: 10.0.166.109 Port: <unset> 80/TCP TargetPort: 9376/TCP NodePort: <unset> 32418/TCP Endpoints: 10.64.0.25:9376,10.64.3.26:9376,10.64.3.27:9376 Session Affinity: ClientIP External Traffic Policy: Local HealthCheck NodePort: 30244 Events: <none> [DeferCleanup (Each)] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] LoadBalancers dump namespaces | framework.go:196 STEP: dump namespace information after failure 11/26/22 06:09:36.645 STEP: Collecting events from namespace "loadbalancers-5104". 11/26/22 06:09:36.645 STEP: Found 19 events. 
11/26/22 06:09:36.748 Nov 26 06:09:36.748: INFO: At 2022-11-26 06:09:25 +0000 UTC - event for affinity-lb-esipp: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-esipp-xvdkc Nov 26 06:09:36.748: INFO: At 2022-11-26 06:09:26 +0000 UTC - event for affinity-lb-esipp: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-esipp-qnhll Nov 26 06:09:36.748: INFO: At 2022-11-26 06:09:26 +0000 UTC - event for affinity-lb-esipp: {replication-controller } SuccessfulCreate: Created pod: affinity-lb-esipp-2d4hh Nov 26 06:09:36.748: INFO: At 2022-11-26 06:09:26 +0000 UTC - event for affinity-lb-esipp-2d4hh: {default-scheduler } Scheduled: Successfully assigned loadbalancers-5104/affinity-lb-esipp-2d4hh to bootstrap-e2e-minion-group-6hf3 Nov 26 06:09:36.748: INFO: At 2022-11-26 06:09:26 +0000 UTC - event for affinity-lb-esipp-qnhll: {default-scheduler } Scheduled: Successfully assigned loadbalancers-5104/affinity-lb-esipp-qnhll to bootstrap-e2e-minion-group-4lvd Nov 26 06:09:36.748: INFO: At 2022-11-26 06:09:26 +0000 UTC - event for affinity-lb-esipp-xvdkc: {default-scheduler } Scheduled: Successfully assigned loadbalancers-5104/affinity-lb-esipp-xvdkc to bootstrap-e2e-minion-group-6hf3 Nov 26 06:09:36.748: INFO: At 2022-11-26 06:09:27 +0000 UTC - event for affinity-lb-esipp-qnhll: {kubelet bootstrap-e2e-minion-group-4lvd} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 06:09:36.748: INFO: At 2022-11-26 06:09:27 +0000 UTC - event for affinity-lb-esipp-qnhll: {kubelet bootstrap-e2e-minion-group-4lvd} Created: Created container affinity-lb-esipp Nov 26 06:09:36.748: INFO: At 2022-11-26 06:09:27 +0000 UTC - event for affinity-lb-esipp-qnhll: {kubelet bootstrap-e2e-minion-group-4lvd} Started: Started container affinity-lb-esipp Nov 26 06:09:36.748: INFO: At 2022-11-26 06:09:28 +0000 UTC - event for affinity-lb-esipp-qnhll: {kubelet bootstrap-e2e-minion-group-4lvd} Killing: Stopping container affinity-lb-esipp Nov 26 06:09:36.748: INFO: At 2022-11-26 06:09:29 +0000 UTC - event for affinity-lb-esipp-xvdkc: {kubelet bootstrap-e2e-minion-group-6hf3} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 06:09:36.748: INFO: At 2022-11-26 06:09:29 +0000 UTC - event for affinity-lb-esipp-xvdkc: {kubelet bootstrap-e2e-minion-group-6hf3} Created: Created container affinity-lb-esipp Nov 26 06:09:36.748: INFO: At 2022-11-26 06:09:30 +0000 UTC - event for affinity-lb-esipp-2d4hh: {kubelet bootstrap-e2e-minion-group-6hf3} Started: Started container affinity-lb-esipp Nov 26 06:09:36.748: INFO: At 2022-11-26 06:09:30 +0000 UTC - event for affinity-lb-esipp-2d4hh: {kubelet bootstrap-e2e-minion-group-6hf3} Created: Created container affinity-lb-esipp Nov 26 06:09:36.749: INFO: At 2022-11-26 06:09:30 +0000 UTC - event for affinity-lb-esipp-2d4hh: {kubelet bootstrap-e2e-minion-group-6hf3} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Nov 26 06:09:36.749: INFO: At 2022-11-26 06:09:30 +0000 UTC - event for affinity-lb-esipp-xvdkc: {kubelet bootstrap-e2e-minion-group-6hf3} Started: Started container affinity-lb-esipp Nov 26 06:09:36.749: INFO: At 2022-11-26 06:09:30 +0000 UTC - event for affinity-lb-esipp-xvdkc: {kubelet bootstrap-e2e-minion-group-6hf3} Killing: Stopping container affinity-lb-esipp Nov 26 06:09:36.749: INFO: At 2022-11-26 06:09:31 +0000 UTC - event for affinity-lb-esipp-qnhll: {kubelet bootstrap-e2e-minion-group-4lvd} 
SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 26 06:09:36.749: INFO: At 2022-11-26 06:09:35 +0000 UTC - event for affinity-lb-esipp-xvdkc: {kubelet bootstrap-e2e-minion-group-6hf3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 26 06:09:36.882: INFO: POD NODE PHASE GRACE CONDITIONS Nov 26 06:09:36.882: INFO: affinity-lb-esipp-2d4hh bootstrap-e2e-minion-group-6hf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:09:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:09:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:09:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:09:26 +0000 UTC }] Nov 26 06:09:36.883: INFO: affinity-lb-esipp-qnhll bootstrap-e2e-minion-group-4lvd Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:09:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:09:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:09:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:09:26 +0000 UTC }] Nov 26 06:09:36.883: INFO: affinity-lb-esipp-xvdkc bootstrap-e2e-minion-group-6hf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:09:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:09:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:09:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:09:25 +0000 UTC }] Nov 26 06:09:36.883: INFO: Nov 26 06:09:37.507: INFO: Logging node info for node bootstrap-e2e-master Nov 26 06:09:37.588: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 193495c1-5b1f-409e-8da9-b1de094ba8ed 620 0 2022-11-26 06:06:31 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 06:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2022-11-26 06:06:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} 
status} {kubelet Update v1 2022-11-26 06:06:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858366464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596222464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:49 +0000 UTC,LastTransitionTime:2022-11-26 06:06:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:06:52 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:06:52 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:06:52 +0000 UTC,LastTransitionTime:2022-11-26 06:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:06:52 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.83.17.181,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d1b5362410d55fa2b320229b7ed22cb3,SystemUUID:d1b53624-10d5-5fa2-b320-229b7ed22cb3,BootID:13424371-b4dc-4f78-9364-1afc151da040,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:135160272,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:124990265,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:57660216,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:09:37.588: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 26 06:09:37.669: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 26 06:09:37.815: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:37.815: INFO: Container kube-scheduler ready: true, restart count 0 Nov 26 06:09:37.815: INFO: metadata-proxy-v0.1-gg5tl started at 2022-11-26 06:06:31 +0000 UTC (0+2 container statuses recorded) Nov 26 06:09:37.815: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:09:37.815: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:09:37.815: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:37.815: INFO: Container etcd-container 
ready: true, restart count 0 Nov 26 06:09:37.815: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:37.815: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 26 06:09:37.815: INFO: kube-controller-manager-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:37.815: INFO: Container kube-controller-manager ready: true, restart count 1 Nov 26 06:09:37.815: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-26 06:06:05 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:37.815: INFO: Container kube-addon-manager ready: false, restart count 0 Nov 26 06:09:37.815: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-26 06:06:05 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:37.815: INFO: Container l7-lb-controller ready: true, restart count 3 Nov 26 06:09:37.815: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:37.815: INFO: Container etcd-container ready: true, restart count 1 Nov 26 06:09:37.815: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-26 06:05:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:37.815: INFO: Container kube-apiserver ready: true, restart count 0 Nov 26 06:09:39.218: INFO: Latency metrics for node bootstrap-e2e-master Nov 26 06:09:39.218: INFO: Logging node info for node bootstrap-e2e-minion-group-4lvd Nov 26 06:09:39.300: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-4lvd e0d66193-5762-49e7-b872-8ea4dee27a72 2645 0 2022-11-26 06:06:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-4lvd kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-4lvd topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-4266":"bootstrap-e2e-minion-group-4lvd"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-26 06:06:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 06:09:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 06:09:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-4lvd,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 
UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:09:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:09:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:09:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:09:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.83.235.77,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-4lvd.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:60066c0df6854403f12a43e8820a94a9,SystemUUID:60066c0d-f685-4403-f12a-43e8820a94a9,BootID:c95b6e64-9275-42a1-b638-b99f8479d167,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-4266^d6fecfae-6d50-11ed-bee1-72f9a192f147],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4266^d6fecfae-6d50-11ed-bee1-72f9a192f147,DevicePath:,},},Config:nil,},} Nov 26 06:09:39.300: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-4lvd Nov 26 06:09:39.394: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-4lvd Nov 26 06:09:39.726: INFO: pod-2c115652-4636-4797-8221-d1eea046cf6a started at 2022-11-26 06:08:36 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:39.726: INFO: Container write-pod ready: false, restart count 0 Nov 26 06:09:39.726: INFO: mutability-test-mzj28 started at 2022-11-26 06:08:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:39.726: INFO: Container netexec ready: true, restart count 1 Nov 26 06:09:39.726: INFO: csi-hostpathplugin-0 started at 2022-11-26 06:08:55 +0000 UTC (0+7 container statuses recorded) Nov 26 06:09:39.726: INFO: Container csi-attacher ready: true, restart count 2 Nov 26 06:09:39.726: INFO: Container csi-provisioner ready: true, restart count 2 Nov 26 06:09:39.726: INFO: Container csi-resizer ready: true, restart count 2 Nov 26 06:09:39.726: INFO: Container csi-snapshotter ready: true, restart count 2 Nov 26 06:09:39.726: INFO: Container hostpath ready: true, restart count 2 Nov 26 06:09:39.726: INFO: Container liveness-probe ready: true, restart count 2 Nov 26 06:09:39.726: INFO: Container node-driver-registrar ready: true, restart count 2 Nov 26 06:09:39.726: INFO: hostexec-bootstrap-e2e-minion-group-4lvd-c2msp started at 2022-11-26 06:09:36 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:39.726: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 06:09:39.726: INFO: coredns-6d97d5ddb-n6d4l started at 2022-11-26 06:06:46 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:39.726: INFO: Container coredns ready: false, restart count 1 Nov 26 06:09:39.726: INFO: mutability-test-mxm86 started at 2022-11-26 06:08:48 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:39.726: INFO: Container netexec ready: true, restart count 1 Nov 26 06:09:39.726: INFO: kube-proxy-bootstrap-e2e-minion-group-4lvd started at 2022-11-26 06:06:29 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:39.726: INFO: Container kube-proxy ready: false, restart count 2 Nov 26 06:09:39.726: INFO: pod-subpath-test-dynamicpv-m52h started at 2022-11-26 06:09:09 +0000 UTC (1+1 container statuses recorded) Nov 26 06:09:39.726: INFO: Init container init-volume-dynamicpv-m52h ready: false, restart count 0 Nov 26 06:09:39.726: INFO: Container test-container-subpath-dynamicpv-m52h ready: false, restart count 0 Nov 26 06:09:39.726: INFO: hostexec-bootstrap-e2e-minion-group-4lvd-5ctsz started at 2022-11-26 06:09:35 +0000 UTC (0+1 container statuses recorded) Nov 26 
06:09:39.726: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 06:09:39.726: INFO: affinity-lb-esipp-qnhll started at 2022-11-26 06:09:26 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:39.726: INFO: Container affinity-lb-esipp ready: false, restart count 1 Nov 26 06:09:39.726: INFO: konnectivity-agent-dx4vl started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:39.726: INFO: Container konnectivity-agent ready: false, restart count 1 Nov 26 06:09:39.726: INFO: pod-configmaps-31d506dc-db6d-4b37-bb2a-2737bf1878c2 started at 2022-11-26 06:09:33 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:39.726: INFO: Container agnhost-container ready: false, restart count 0 Nov 26 06:09:39.726: INFO: hostexec-bootstrap-e2e-minion-group-4lvd-vwqhj started at <nil> (0+0 container statuses recorded) Nov 26 06:09:39.726: INFO: metadata-proxy-v0.1-z77w4 started at 2022-11-26 06:06:30 +0000 UTC (0+2 container statuses recorded) Nov 26 06:09:39.726: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:09:39.726: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:09:40.181: INFO: Latency metrics for node bootstrap-e2e-minion-group-4lvd Nov 26 06:09:40.181: INFO: Logging node info for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:09:40.339: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6hf3 b331b1c1-4929-41cb-863a-a271765dc91a 2725 0 2022-11-26 06:06:35 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6hf3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-6hf3 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-5523":"bootstrap-e2e-minion-group-6hf3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-26 06:06:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-26 06:06:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 06:06:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2022-11-26 06:06:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-26 06:09:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-6hf3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:06:39 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:06:39 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no 
deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:06:39 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:06:39 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:06:39 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:06:39 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:06:39 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:49 +0000 UTC,LastTransitionTime:2022-11-26 06:06:49 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:09:39 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:09:39 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:09:39 +0000 UTC,LastTransitionTime:2022-11-26 06:06:35 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:09:39 +0000 UTC,LastTransitionTime:2022-11-26 06:06:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.127.104.133,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6hf3.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:26797942e06ae330f816fe3103a312e5,SystemUUID:26797942-e06a-e330-f816-fe3103a312e5,BootID:2fb94944-0ddb-4524-8e35-935d1c8900f4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 
registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 26 06:09:40.339: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:09:40.615: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6hf3 Nov 26 06:09:41.373: INFO: nfs-server started at 2022-11-26 06:09:26 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:41.373: INFO: Container nfs-server ready: false, restart count 0 Nov 26 06:09:41.373: INFO: kube-proxy-bootstrap-e2e-minion-group-6hf3 started at 2022-11-26 06:06:35 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:41.373: INFO: Container kube-proxy ready: false, restart count 2 Nov 26 06:09:41.373: INFO: metrics-server-v0.5.2-867b8754b9-hr966 started at 2022-11-26 06:07:02 +0000 UTC (0+2 container statuses recorded) Nov 26 06:09:41.373: INFO: Container metrics-server ready: false, restart count 1 Nov 26 06:09:41.373: INFO: Container metrics-server-nanny ready: false, restart count 1 Nov 26 06:09:41.373: INFO: ss-1 started at 2022-11-26 06:09:25 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:41.373: INFO: Container webserver ready: false, restart count 0 Nov 26 06:09:41.373: INFO: metadata-proxy-v0.1-hgwt5 started at 2022-11-26 06:06:36 +0000 UTC (0+2 container statuses recorded) Nov 26 06:09:41.373: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:09:41.373: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:09:41.373: INFO: pod-f924687c-2191-45ec-8e19-0be662f948aa started at 2022-11-26 06:09:24 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:41.373: INFO: Container write-pod ready: false, restart count 0 Nov 26 06:09:41.373: INFO: test-hostpath-type-nqlrv started at 2022-11-26 06:09:25 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:41.373: INFO: Container host-path-sh-testing ready: true, restart count 0 Nov 26 06:09:41.373: INFO: hostexec-bootstrap-e2e-minion-group-6hf3-8fjm4 started at 2022-11-26 06:09:17 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:41.373: INFO: Container agnhost-container ready: true, restart count 2 Nov 26 06:09:41.373: INFO: test-hostpath-type-cvrqm started at 2022-11-26 06:09:18 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:41.373: INFO: Container host-path-sh-testing ready: true, restart count 0 Nov 26 06:09:41.373: INFO: affinity-lb-esipp-xvdkc started at 2022-11-26 06:09:25 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:41.373: INFO: Container affinity-lb-esipp ready: true, restart count 1 Nov 26 06:09:41.373: INFO: csi-mockplugin-0 started at 2022-11-26 06:08:36 +0000 UTC (0+3 container statuses recorded) Nov 26 06:09:41.373: INFO: Container csi-provisioner ready: true, restart count 0 Nov 26 06:09:41.373: INFO: Container driver-registrar ready: true, restart count 0 Nov 26 06:09:41.373: INFO: Container mock ready: true, restart count 0 Nov 26 06:09:41.373: INFO: affinity-lb-esipp-2d4hh started at 2022-11-26 06:09:26 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:41.373: INFO: Container affinity-lb-esipp ready: true, restart count 0 Nov 26 06:09:41.373: INFO: konnectivity-agent-czjjn started at 2022-11-26 06:06:49 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:41.373: INFO: Container konnectivity-agent ready: false, restart count 1 Nov 26 
06:09:41.373: INFO: pod-secrets-6ba160e5-357c-413d-96d4-4cc971c326e3 started at 2022-11-26 06:08:35 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:41.373: INFO: Container creates-volume-test ready: false, restart count 0 Nov 26 06:09:46.769: INFO: Latency metrics for node bootstrap-e2e-minion-group-6hf3 Nov 26 06:09:46.769: INFO: Logging node info for node bootstrap-e2e-minion-group-8xrn Nov 26 06:09:46.829: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-8xrn bdb3ca25-6ac6-4bb7-a82f-bb358d7ba8f9 2860 0 2022-11-26 06:06:29 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-8xrn kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.hostpath.csi/node:bootstrap-e2e-minion-group-8xrn topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-multivolume-4259":"bootstrap-e2e-minion-group-8xrn","csi-hostpath-multivolume-9422":"bootstrap-e2e-minion-group-8xrn","csi-hostpath-provisioning-138":"bootstrap-e2e-minion-group-8xrn","csi-hostpath-provisioning-7419":"bootstrap-e2e-minion-group-8xrn"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2022-11-26 06:06:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2022-11-26 06:06:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-26 06:09:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:volumesAttached":{}}} status} {kubelet Update v1 2022-11-26 06:09:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-gce-dg-1-6-1-5-dwngr-clu/us-west1-b/bootstrap-e2e-minion-group-8xrn,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 
UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-26 06:06:34 +0000 UTC,LastTransitionTime:2022-11-26 06:06:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-26 06:06:38 +0000 UTC,LastTransitionTime:2022-11-26 06:06:38 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-26 06:09:43 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-26 06:09:43 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-26 06:09:43 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-26 06:09:43 +0000 UTC,LastTransitionTime:2022-11-26 06:06:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.105.92.105,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-8xrn.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-8xrn.c.k8s-gce-dg-1-6-1-5-dwngr-clu.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1e49116bed901a0b492cc2a872edd7a8,SystemUUID:1e49116b-ed90-1a0b-492c-c2a872edd7a8,BootID:24bc719e-5b36-4d61-bc89-44455ca28dd7,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.50+70617042976dc1,KubeProxyVersion:v1.27.0-alpha.0.50+70617042976dc1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.50_70617042976dc1],SizeBytes:67201736,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a 
registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-7419^e35eae53-6d50-11ed-93e8-0ef86fd76d4a kubernetes.io/csi/csi-hostpath-provisioning-7419^e360f917-6d50-11ed-93e8-0ef86fd76d4a kubernetes.io/csi/csi-hostpath-provisioning-7419^e36a9f2b-6d50-11ed-93e8-0ef86fd76d4a kubernetes.io/csi/csi-hostpath-provisioning-7419^e36cc4f9-6d50-11ed-93e8-0ef86fd76d4a kubernetes.io/csi/csi-hostpath-provisioning-7419^e3757e30-6d50-11ed-93e8-0ef86fd76d4a],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7419^e35eae53-6d50-11ed-93e8-0ef86fd76d4a,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7419^e36cc4f9-6d50-11ed-93e8-0ef86fd76d4a,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7419^e36a9f2b-6d50-11ed-93e8-0ef86fd76d4a,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7419^e3757e30-6d50-11ed-93e8-0ef86fd76d4a,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-7419^e360f917-6d50-11ed-93e8-0ef86fd76d4a,DevicePath:,},},Config:nil,},} Nov 26 06:09:46.830: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-8xrn Nov 26 06:09:46.891: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-8xrn Nov 26 06:09:47.167: INFO: kube-proxy-bootstrap-e2e-minion-group-8xrn started at 2022-11-26 06:06:29 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:47.167: INFO: Container kube-proxy ready: false, restart count 2 Nov 26 06:09:47.167: INFO: konnectivity-agent-7ppwz started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:47.167: INFO: Container konnectivity-agent ready: true, restart count 2 Nov 26 06:09:47.167: INFO: hostpath-2-client started at 2022-11-26 06:09:31 +0000 UTC (0+1 container 
statuses recorded) Nov 26 06:09:47.167: INFO: Container hostpath-2-client ready: false, restart count 0 Nov 26 06:09:47.167: INFO: hostexec-bootstrap-e2e-minion-group-8xrn-j9z74 started at 2022-11-26 06:09:13 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:47.167: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 06:09:47.167: INFO: l7-default-backend-8549d69d99-7w7f7 started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:47.167: INFO: Container default-http-backend ready: true, restart count 0 Nov 26 06:09:47.167: INFO: hostpath-0-client started at <nil> (0+0 container statuses recorded) Nov 26 06:09:47.167: INFO: pod-subpath-test-preprovisionedpv-spd9 started at 2022-11-26 06:09:07 +0000 UTC (1+2 container statuses recorded) Nov 26 06:09:47.167: INFO: Init container init-volume-preprovisionedpv-spd9 ready: true, restart count 1 Nov 26 06:09:47.167: INFO: Container test-container-subpath-preprovisionedpv-spd9 ready: true, restart count 1 Nov 26 06:09:47.167: INFO: Container test-container-volume-preprovisionedpv-spd9 ready: true, restart count 1 Nov 26 06:09:47.167: INFO: hostexec-bootstrap-e2e-minion-group-8xrn-2kfpg started at 2022-11-26 06:08:44 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:47.167: INFO: Container agnhost-container ready: true, restart count 0 Nov 26 06:09:47.167: INFO: hostpath-3-client started at 2022-11-26 06:09:31 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:47.167: INFO: Container hostpath-3-client ready: false, restart count 0 Nov 26 06:09:47.167: INFO: csi-hostpathplugin-0 started at 2022-11-26 06:09:32 +0000 UTC (0+7 container statuses recorded) Nov 26 06:09:47.167: INFO: Container csi-attacher ready: true, restart count 1 Nov 26 06:09:47.167: INFO: Container csi-provisioner ready: true, restart count 1 Nov 26 06:09:47.167: INFO: Container csi-resizer ready: true, restart count 1 Nov 26 06:09:47.167: INFO: Container csi-snapshotter ready: true, restart count 1 Nov 26 06:09:47.167: INFO: Container hostpath ready: true, restart count 1 Nov 26 06:09:47.167: INFO: Container liveness-probe ready: true, restart count 1 Nov 26 06:09:47.167: INFO: Container node-driver-registrar ready: true, restart count 1 Nov 26 06:09:47.167: INFO: volume-snapshot-controller-0 started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:47.167: INFO: Container volume-snapshot-controller ready: true, restart count 2 Nov 26 06:09:47.167: INFO: var-expansion-9bc36452-7149-43e0-822d-e07557bedbc2 started at 2022-11-26 06:08:35 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:47.167: INFO: Container dapi-container ready: false, restart count 0 Nov 26 06:09:47.167: INFO: csi-hostpathplugin-0 started at 2022-11-26 06:08:37 +0000 UTC (0+7 container statuses recorded) Nov 26 06:09:47.167: INFO: Container csi-attacher ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container csi-provisioner ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container csi-resizer ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container hostpath ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container liveness-probe ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 26 06:09:47.167: INFO: kube-dns-autoscaler-5f6455f985-z5fph started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 
06:09:47.167: INFO: Container autoscaler ready: false, restart count 1 Nov 26 06:09:47.167: INFO: hostpath-1-client started at 2022-11-26 06:09:31 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:47.167: INFO: Container hostpath-1-client ready: false, restart count 0 Nov 26 06:09:47.167: INFO: csi-hostpathplugin-0 started at 2022-11-26 06:09:11 +0000 UTC (0+7 container statuses recorded) Nov 26 06:09:47.167: INFO: Container csi-attacher ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container csi-provisioner ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container csi-resizer ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container hostpath ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container liveness-probe ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 26 06:09:47.167: INFO: metadata-proxy-v0.1-h465b started at 2022-11-26 06:06:30 +0000 UTC (0+2 container statuses recorded) Nov 26 06:09:47.167: INFO: Container metadata-proxy ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 26 06:09:47.167: INFO: hostexec-bootstrap-e2e-minion-group-8xrn-2m9l7 started at <nil> (0+0 container statuses recorded) Nov 26 06:09:47.167: INFO: coredns-6d97d5ddb-rr67j started at 2022-11-26 06:06:38 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:47.167: INFO: Container coredns ready: true, restart count 0 Nov 26 06:09:47.167: INFO: csi-hostpathplugin-0 started at 2022-11-26 06:08:38 +0000 UTC (0+7 container statuses recorded) Nov 26 06:09:47.167: INFO: Container csi-attacher ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container csi-provisioner ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container csi-resizer ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container hostpath ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container liveness-probe ready: true, restart count 0 Nov 26 06:09:47.167: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 26 06:09:47.167: INFO: pod-fefa9900-cce6-4435-a3b9-4821d4bbb9fa started at 2022-11-26 06:09:32 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:47.167: INFO: Container write-pod ready: false, restart count 0 Nov 26 06:09:47.167: INFO: ss-0 started at 2022-11-26 06:08:35 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:47.167: INFO: Container webserver ready: true, restart count 0 Nov 26 06:09:47.167: INFO: hostpath-4-client started at 2022-11-26 06:09:31 +0000 UTC (0+1 container statuses recorded) Nov 26 06:09:47.167: INFO: Container hostpath-4-client ready: false, restart count 0 Nov 26 06:09:48.060: INFO: Latency metrics for node bootstrap-e2e-minion-group-8xrn [DeferCleanup (Each)] [sig-network] LoadBalancers tear down framework | framework.go:193 STEP: Destroying namespace "loadbalancers-5104" for this suite. 11/26/22 06:09:48.06
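For reference, the Node dumps above are the framework's verbatim serialization of each Node object; the useful signal is the NodeCondition list (Ready, MemoryPressure, DiskPressure, plus the node-problem-detector conditions). A minimal client-go sketch that retrieves the same condition rows — illustrative only, not framework code, and assuming KUBECONFIG points at a reachable cluster — looks roughly like this:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the e2e run uses.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print one row per condition, matching the fields in the dumps above.
	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			fmt.Printf("%s\t%s=%s\t%s\n", node.Name, cond.Type, cond.Status, cond.Reason)
		}
	}
}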
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sonly\sallow\saccess\sfrom\sservice\sloadbalancer\ssource\sranges\s\[Slow\]$'
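The --ginkgo.focus argument above is a shell-escaped regular expression over the full spec name (\s for spaces, \[ and \- for literal brackets and hyphens). A small illustrative Go snippet confirms which spec it selects:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The focus regex from the repro command, unescaped from the shell.
	focus := regexp.MustCompile(`Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sLoadBalancers\sshould\sonly\sallow\saccess\sfrom\sservice\sloadbalancer\ssource\sranges\s\[Slow\]$`)
	spec := "Kubernetes e2e suite [It] [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]"
	fmt.Println(focus.MatchString(spec)) // true
}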
test/e2e/network/service.go:4078 k8s.io/kubernetes/test/e2e/network.checkReachabilityFromPod(0x1, 0x0?, {0xc00430e858, 0x12}, {0xc00430e708, 0x11}, {0xc004e263d0?, 0x7fe0bc8?}) test/e2e/network/service.go:4078 +0x165 k8s.io/kubernetes/test/e2e/network.glob..func19.5() test/e2e/network/loadbalancer.go:568 +0x857
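The frames above point at the reachability helper that drives the wget probes logged below. A rough sketch of that retry pattern — shelling out to kubectl as an illustrative stand-in for the framework's own exec helpers, with the pod and target names taken from this run — would be:

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkReachable execs a short wget inside a helper pod and polls until it
// succeeds or the timeout expires, like the "Retry until timeout" loop below.
func checkReachable(namespace, pod, target string) error {
	cmd := fmt.Sprintf("wget -T 5 -qO- %q", target)
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		out, err := exec.Command("kubectl", "--namespace", namespace,
			"exec", pod, "--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("target not reachable yet: %v (%s)\n", err, out)
			return false, nil // swallow the error and retry
		}
		return true, nil
	})
}

func main() {
	// Names as they appear in this run's log.
	if err := checkReachable("loadbalancers-8265", "execpod-acceptqsr5s", "34.145.84.134"); err != nil {
		fmt.Println("unreachable:", err)
	}
}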
[BeforeEach] [sig-network] LoadBalancers set up framework | framework.go:178 STEP: Creating a kubernetes client 11/26/22 06:20:06.443 Nov 26 06:20:06.444: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename loadbalancers 11/26/22 06:20:06.445 STEP: Waiting for a default service account to be provisioned in namespace 11/26/22 06:20:06.621 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 11/26/22 06:20:06.709 [BeforeEach] [sig-network] LoadBalancers test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] LoadBalancers test/e2e/network/loadbalancer.go:65 [It] should only allow access from service loadbalancer source ranges [Slow] test/e2e/network/loadbalancer.go:487 STEP: Prepare allow source ips 11/26/22 06:20:06.911 Nov 26 06:20:06.911: INFO: Creating new exec pod Nov 26 06:20:06.977: INFO: Waiting up to 5m0s for pod "execpod-acceptqsr5s" in namespace "loadbalancers-8265" to be "running" Nov 26 06:20:07.028: INFO: Pod "execpod-acceptqsr5s": Phase="Pending", Reason="", readiness=false. Elapsed: 50.471594ms Nov 26 06:20:09.122: INFO: Pod "execpod-acceptqsr5s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144677849s Nov 26 06:20:11.195: INFO: Pod "execpod-acceptqsr5s": Phase="Running", Reason="", readiness=true. Elapsed: 4.217585442s Nov 26 06:20:11.195: INFO: Pod "execpod-acceptqsr5s" satisfied condition "running" Nov 26 06:20:11.195: INFO: Creating new exec pod Nov 26 06:20:11.305: INFO: Waiting up to 5m0s for pod "execpod-dropgqznm" in namespace "loadbalancers-8265" to be "running" Nov 26 06:20:11.482: INFO: Pod "execpod-dropgqznm": Phase="Pending", Reason="", readiness=false. Elapsed: 177.393635ms Nov 26 06:20:13.531: INFO: Pod "execpod-dropgqznm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225813057s Nov 26 06:20:15.578: INFO: Pod "execpod-dropgqznm": Phase="Running", Reason="", readiness=true. Elapsed: 4.273361432s Nov 26 06:20:15.578: INFO: Pod "execpod-dropgqznm" satisfied condition "running" STEP: creating a pod to be part of the service lb-sourcerange 11/26/22 06:20:15.578 Nov 26 06:20:15.644: INFO: Waiting up to 2m0s for 1 pods to be created Nov 26 06:20:15.741: INFO: Found all 1 pods Nov 26 06:20:15.741: INFO: Waiting up to 2m0s for 1 pods to be running and ready: [lb-sourcerange-pb7ck] Nov 26 06:20:15.741: INFO: Waiting up to 2m0s for pod "lb-sourcerange-pb7ck" in namespace "loadbalancers-8265" to be "running and ready" Nov 26 06:20:15.832: INFO: Pod "lb-sourcerange-pb7ck": Phase="Pending", Reason="", readiness=false. Elapsed: 90.651955ms Nov 26 06:20:15.832: INFO: Error evaluating pod condition running and ready: want pod 'lb-sourcerange-pb7ck' on 'bootstrap-e2e-minion-group-8xrn' to be 'Running' but was 'Pending' Nov 26 06:20:17.938: INFO: Pod "lb-sourcerange-pb7ck": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.197394127s Nov 26 06:20:17.938: INFO: Error evaluating pod condition running and ready: pod 'lb-sourcerange-pb7ck' on 'bootstrap-e2e-minion-group-8xrn' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:20:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:20:15 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:20:15 +0000 UTC ContainersNotReady containers with unready status: [netexec]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-26 06:20:15 +0000 UTC }] Nov 26 06:20:20.065: INFO: Pod "lb-sourcerange-pb7ck": Phase="Running", Reason="", readiness=true. Elapsed: 4.323769793s Nov 26 06:20:20.065: INFO: Pod "lb-sourcerange-pb7ck" satisfied condition "running and ready" Nov 26 06:20:20.065: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [lb-sourcerange-pb7ck] Nov 26 06:20:20.517: INFO: Waiting up to 15m0s for service "lb-sourcerange" to have a LoadBalancer Nov 26 06:21:16.096: INFO: Retrying .... error trying to get Service lb-sourcerange: Get "https://34.83.17.181/api/v1/namespaces/loadbalancers-8265/services/lb-sourcerange": dial tcp 34.83.17.181:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=1315, ErrCode=NO_ERROR, debug="" Nov 26 06:21:16.661: INFO: Retrying .... error trying to get Service lb-sourcerange: Get "https://34.83.17.181/api/v1/namespaces/loadbalancers-8265/services/lb-sourcerange": dial tcp 34.83.17.181:443: connect: connection refused STEP: check reachability from different sources 11/26/22 06:22:20.673 Nov 26 06:22:20.673: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-acceptqsr5s -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134"' Nov 26 06:22:21.265: INFO: rc: 1 Nov 26 06:22:21.265: INFO: Expect target to be reachable. But got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-acceptqsr5s -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134": Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1. Retry until timeout Nov 26 06:22:23.267: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-acceptqsr5s -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134"' Nov 26 06:22:23.750: INFO: rc: 1 Nov 26 06:22:23.750: INFO: Expect target to be reachable. But got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-acceptqsr5s -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134": Command stdout: stderr: Error from server: error dialing backend: No agent available error: exit status 1. 
... (the same kubectl exec / wget attempt repeated every 2 s from 06:22:23.267 through 06:22:47.830, each returning rc: 1 with the same "Error from server: error dialing backend: No agent available" failure and ending "Retry until timeout") ...
Nov 26 06:22:49.266: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-acceptqsr5s -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134"'
Nov 26 06:22:49.380: INFO: rc: 1
Nov 26 06:22:49.381: INFO: Expect target to be reachable. But got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-acceptqsr5s -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134":
Command stdout:
stderr: The connection to the server 34.83.17.181 was refused - did you specify the right host or port?
error: exit status 1. Retry until timeout
... (identical attempts repeated every 2 s from 06:22:51.266 through 06:23:51.420, each returning rc: 1 with the same "The connection to the server 34.83.17.181 was refused - did you specify the right host or port?" failure and ending "Retry until timeout") ...
Nov 26 06:23:53.267: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-acceptqsr5s -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134"'
Nov 26 06:23:55.631: INFO: rc: 1
Nov 26 06:23:55.631: INFO: Expect target to be reachable. But got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-acceptqsr5s -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134":
Command stdout:
stderr: Error from server: error dialing backend: No agent available
error: exit status 1. Retry until timeout
... (identical attempts repeated every 2 s from 06:23:57.266 through 06:24:31.628, each returning rc: 1 with the same "Error from server: error dialing backend: No agent available" failure and ending "Retry until timeout") ...
Nov 26 06:24:33.266: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-acceptqsr5s -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134"'
Nov 26 06:24:33.832: INFO: stderr: "+ wget -T 5 -qO- 34.145.84.134\n"
Nov 26 06:24:33.832: INFO: stdout: "NOW: 2022-11-26 06:24:33.794305566 +0000 UTC m=+133.082910298"
Nov 26 06:24:33.832: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-dropgqznm -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134"'
Nov 26 06:24:34.371: INFO: rc: 1
STEP: Update service LoadBalancerSourceRange and check reachability 11/26/22 06:24:34.504
Nov 26 06:24:34.608: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-acceptqsr5s -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134"'
Nov 26 06:24:40.161: INFO: rc: 1
Nov 26 06:24:40.161: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-dropgqznm -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134"'
Nov 26 06:24:40.748: INFO: rc: 1
Nov 26 06:24:40.748: INFO: Expect target to be reachable. But got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-dropgqznm -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134":
Command stdout:
stderr: + wget -T 5 -qO- 34.145.84.134
wget: can't connect to remote host (34.145.84.134): Connection refused
command terminated with exit code 1
error: exit status 1. Retry until timeout
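The step above mutates the Service's spec.loadBalancerSourceRanges, after which only clients inside the listed CIDRs should reach the LoadBalancer IP. A minimal client-go sketch of that mutation, with the kubeconfig path, namespace, and Service name taken from this run; the CIDR is illustrative (the real test uses the "accept" pod's address):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path mirrors the one used by the test run above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	svc, err := cs.CoreV1().Services("loadbalancers-8265").Get(ctx, "lb-sourcerange", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Only clients inside these CIDRs may reach the LoadBalancer IP; this
	// value is a placeholder, not the one the test computed.
	svc.Spec.LoadBalancerSourceRanges = []string{"10.64.0.0/24"}
	if _, err := cs.CoreV1().Services("loadbalancers-8265").Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("updated loadBalancerSourceRanges")
}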
But got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-dropgqznm -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134":
Command stdout:
stderr:
+ wget -T 5 -qO- 34.145.84.134
wget: can't connect to remote host (34.145.84.134): Connection refused
command terminated with exit code 1
error: exit status 1. Retry until timeout
[... the identical kubectl exec / wget probe is reissued roughly every 2s from 06:24:46.748 through 06:25:06.748; every attempt returns rc: 1 with the same "Connection refused" error; the duplicate entries are omitted ...]
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow] (Spec Runtime: 5m0.415s)
    test/e2e/network/loadbalancer.go:487
    In [It] (Node Runtime: 5m0s)
      test/e2e/network/loadbalancer.go:487
      At [By Step] Update service LoadBalancerSourceRange and check reachability (Step Runtime: 32.354s)
        test/e2e/network/loadbalancer.go:544

  Spec Goroutine
  goroutine 1300 [select]
    k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.ExecWithFullOutput({0xc000dbf080?, 0x0?})
      test/e2e/framework/kubectl/builder.go:125
    k8s.io/kubernetes/test/e2e/framework/kubectl.KubectlBuilder.Exec(...)
      test/e2e/framework/kubectl/builder.go:107
    k8s.io/kubernetes/test/e2e/framework/kubectl.RunKubectl({0xc00430e858?, 0x4?}, {0xc0004cbbb0?, 0x0?, 0x1?})
      test/e2e/framework/kubectl/builder.go:154
    k8s.io/kubernetes/test/e2e/framework/pod/output.RunHostCmd(...)
      test/e2e/framework/pod/output/output.go:82
  > k8s.io/kubernetes/test/e2e/network.checkReachabilityFromPod.func1()
      test/e2e/network/service.go:4066
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2742871, 0x0})
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe0bc8?, 0xc0001b0000?}, 0x2?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc00494c318, 0x2fdb16a?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe0bc8, 0xc0001b0000}, 0x88?, 0x2fd9d05?, 0x30?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe0bc8, 0xc0001b0000}, 0xc0004cbe30?, 0xc0004cbdd8?, 0x262a967?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x75ef083?, 0x11?, 0xc0004cbe30?)
      vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514
  > k8s.io/kubernetes/test/e2e/network.checkReachabilityFromPod(0x1, 0x0?, {0xc00430e858, 0x12}, {0xc00430e708, 0x11}, {0xc004e263d0?, 0x7fe0bc8?})
      test/e2e/network/service.go:4065
  > k8s.io/kubernetes/test/e2e/network.glob..func19.5()
      test/e2e/network/loadbalancer.go:556
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d591ce, 0xc004fb8300})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:449
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738
------------------------------
Nov 26 06:25:07.727: INFO: rc: 1
Nov 26 06:25:07.727: INFO: Expect target to be reachable. But got err: (the same "Connection refused" error as above; two more identical attempts at 06:25:08.749 and 06:25:09.855 fail the same way)
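What the loop above is doing, mechanically: the test execs into the client pod and runs wget against the load-balancer IP, treating any non-zero exit as "not reachable yet" and retrying until a timeout. A minimal standalone sketch of the same probe in Go follows; it assumes kubectl is on PATH and the kubeconfig already points at the cluster, and it reuses the namespace, pod name, and IP from the log entries above. This is an illustration, not the test's own code.

// reachability_probe.go: hedged reproduction of the logged retry loop.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Same shape as the logged command:
		// kubectl -n loadbalancers-8265 exec execpod-dropgqznm -- sh -x -c 'wget -T 5 -qO- "34.145.84.134"'
		out, err := exec.Command("kubectl",
			"--namespace=loadbalancers-8265",
			"exec", "execpod-dropgqznm", "--",
			"/bin/sh", "-x", "-c", `wget -T 5 -qO- "34.145.84.134"`,
		).CombinedOutput()
		if err == nil {
			fmt.Printf("reachable, got: %s\n", out) // e.g. the server's "NOW: <timestamp>" body
			return
		}
		fmt.Printf("not reachable yet (%v); retry until timeout\n", err)
		time.Sleep(2 * time.Second) // the log shows roughly 2s between attempts
	}
	fmt.Println("timed out: target never became reachable")
}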
[... the probe keeps failing with "Connection refused" roughly every 2s; attempts from 06:25:10.749 through 06:25:26.749 all return rc: 1; the duplicate entries are omitted ...]
------------------------------
Progress Report for Ginkgo Process #17 (same spec; Spec Runtime: 5m20.417s, Node Runtime: 5m20.003s, Step Runtime: 52.357s; Spec Goroutine stack identical to the report above, omitted)
------------------------------
[... further "Connection refused" attempts from 06:25:27.492 through 06:25:35.759, then one more Running at 06:25:36.749 ...]
------------------------------
Progress Report for Ginkgo Process #17 (same spec; Spec Runtime: 5m40.42s, Node Runtime: 5m40.006s, Step Runtime: 1m12.36s; stack identical, omitted)
------------------------------
------------------------------
Progress Report for Ginkgo Process #17 (same spec; Spec Runtime: 6m0.423s, Node Runtime: 6m0.009s, Step Runtime: 1m32.363s; stack identical, omitted)
------------------------------
Nov 26 06:26:07.543: INFO: rc: 1
Nov 26 06:26:07.543: INFO: Expect target to be reachable. But got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-dropgqznm -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134":
Command stdout:
stderr:
error: Timeout occurred
error: exit status 1. Retry until timeout
Nov 26 06:26:08.749: INFO: Running '(the same kubectl exec command as above)'
Nov 26 06:26:09.114: INFO: rc: 1
Nov 26 06:26:09.115: INFO: Expect target to be reachable. But got err: error running (the same command):
Command stdout:
stderr:
Error from server: error dialing backend: No agent available
error: exit status 1. Retry until timeout
[... the "No agent available" failure repeats at 06:26:11.103 and 06:26:13.110 ...]
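The Spec Goroutine dumps in these progress reports show where the retry policy lives: checkReachabilityFromPod wraps the kubectl probe in wait.PollImmediate (vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514), which runs the condition once immediately and then at a fixed interval until it returns done, returns an error, or the timeout elapses. A small self-contained sketch of that pattern follows; the interval and timeout values are illustrative, and the condition is a stand-in for the real wget probe.

// poll_sketch.go: the wait.PollImmediate pattern visible in the goroutine dump.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	// PollImmediate runs the condition right away, then every interval,
	// until it returns (true, nil), returns an error, or the timeout elapses.
	err := wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		attempts++
		// In the real test this is the kubectl-exec wget probe; here we
		// simulate a target that becomes reachable on the fifth attempt.
		return attempts >= 5, nil
	})
	fmt.Printf("finished after %d attempts, err=%v\n", attempts, err)
}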
[... the same "No agent available" failure continues roughly every 2s from 06:26:14.749 through 06:26:39.114; one more Progress Report for Ginkgo Process #17 lands in this window (Spec Runtime: 6m20.426s, Node Runtime: 6m20.011s, Step Runtime: 1m52.365s) with an identical Spec Goroutine stack; the duplicates are omitted ...]
Retry until timeout
Nov 26 06:26:40.748: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-dropgqznm -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134"'
Nov 26 06:26:41.281: INFO: stderr: "+ wget -T 5 -qO- 34.145.84.134\n"
Nov 26 06:26:41.281: INFO: stdout: "NOW: 2022-11-26 06:26:41.252464142 +0000 UTC m=+260.541068873"
STEP: Delete LoadBalancerSourceRange field and check reachability 11/26/22 06:26:41.281
Nov 26 06:26:41.456: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-acceptqsr5s -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134"'
Nov 26 06:26:41.811: INFO: rc: 1
Nov 26 06:26:41.811: INFO: Expect target to be reachable. But got err: error running (the command above):
Command stdout:
stderr:
Error from server: error dialing backend: No agent available
error: exit status 1. Retry until timeout
[... identical attempts against execpod-acceptqsr5s at 06:26:43.811 and 06:26:45.811 fail the same way ...]
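The step above ("Delete LoadBalancerSourceRange field and check reachability") clears spec.loadBalancerSourceRanges on the Service, so the cloud load balancer should again admit traffic from any source. A hedged client-go sketch of that mutation follows; the service name is a placeholder (the log never prints it), the kubeconfig path and namespace are copied from the log, and the real test performs this through its own framework helpers rather than a standalone program.

// clear_source_ranges.go: hedged sketch of removing the source-range allowlist.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	svcs := cs.CoreV1().Services("loadbalancers-8265")
	svc, err := svcs.Get(ctx, "example-lb-service", metav1.GetOptions{}) // placeholder name
	if err != nil {
		panic(err)
	}
	svc.Spec.LoadBalancerSourceRanges = nil // drop the source-range restriction entirely
	if _, err := svcs.Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}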
------------------------------
Progress Report for Ginkgo Process #17
Automatically polling progress:
  [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow] (Spec Runtime: 6m40.428s)
    test/e2e/network/loadbalancer.go:487
    In [It] (Node Runtime: 6m40.014s)
      test/e2e/network/loadbalancer.go:487
      At [By Step] Delete LoadBalancerSourceRange field and check reachability (Step Runtime: 5.59s)
        test/e2e/network/loadbalancer.go:558
(Spec Goroutine stack as in the earlier reports, now entered from test/e2e/network/loadbalancer.go:567 and polling checkReachabilityFromPod at test/e2e/network/service.go:4065; omitted)
------------------------------
Nov 26 06:26:47.811: INFO: Running '/workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-acceptqsr5s -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134"'
Nov 26 06:26:48.174: INFO: rc: 1
Nov 26 06:26:48.174: INFO: Expect target to be reachable. But got err: (the same "No agent available" error) Retry until timeout
Nov 26 06:26:49.811: INFO: Running '(the same kubectl exec command)'
Nov 26 06:26:49.944: INFO: rc: 1
Nov 26 06:26:49.944: INFO: Expect target to be reachable. But got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-acceptqsr5s -- /bin/sh -x -c wget -T 5 -qO- "34.145.84.134":
Command stdout:
stderr:
The connection to the server 34.83.17.181 was refused - did you specify the right host or port?
error: exit status 1. Retry until timeout
[... the same "connection to the server 34.83.17.181 was refused" failure repeats at 06:26:51.944, 06:26:53.951, 06:26:55.928 and 06:26:57.948 ...]
Nov 26 06:26:59.812: INFO: Running '(the same kubectl exec command)'
Nov 26 06:26:59.940: INFO: rc: 1
Nov 26 06:26:59.940: INFO: Expect target to be reachable. But got err: error running /workspace/github.com/containerd/containerd/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.17.181 --kubeconfig=/workspace/.kube/config --namespace=loadbalancers-8265 exec execpod-acceptqsr5s -- /bin/sh -