Result | FAILURE |
Tests | 1 failed / 39 succeeded |
Started | |
Elapsed | 22m58s |
Revision | a0c9adfb66819b14be4a967b1762dcbcb7386554 |
Refs | 1202 |
E2E:Machine | n1-standard-4 |
E2E:MaxNodes | 3 |
E2E:MinNodes | 1 |
E2E:Region | us-central1 |
E2E:Version | 1.30.3-gke.1639000 |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=test\sTestVaultKMSSpire$'
e2e_test.go:898: Create namespace earth-prcp7 to deploy to
e2e_test.go:900: error creating scc: failed to assign SCC: exec: "oc": executable file not found in $PATH, output:
from junit_8vYbfd6g.xml
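The failing step is OpenShift-specific: TestVaultKMSSpire shells out to the `oc` CLI to assign a SecurityContextConstraint, but the GKE cluster under test has no `oc` binary on `$PATH`, so the exec fails before the test body runs. Below is a minimal sketch of how a setup helper could detect this and skip instead of failing; the helper name `maybeAssignSCC` and the skip behaviour are assumptions for illustration, not the repository's actual code.

```go
package test

import (
	"fmt"
	"os/exec"
	"testing"
)

// maybeAssignSCC is a hypothetical helper: it only attempts the OpenShift-specific
// SCC assignment when the oc CLI is actually present, and skips the test on
// clusters (such as GKE) where oc is not installed instead of failing the suite.
func maybeAssignSCC(t *testing.T, namespace, scc string) {
	t.Helper()

	// exec.LookPath reports whether "oc" can be resolved on $PATH.
	if _, err := exec.LookPath("oc"); err != nil {
		t.Skipf("oc not found on $PATH, skipping SCC assignment in %s: %v", namespace, err)
	}

	// "oc adm policy add-scc-to-user" is the usual way to grant an SCC to a
	// service account; the target service account here is illustrative.
	subject := fmt.Sprintf("system:serviceaccount:%s:default", namespace)
	out, err := exec.Command("oc", "adm", "policy", "add-scc-to-user", scc, subject).CombinedOutput()
	if err != nil {
		t.Fatalf("failed to assign SCC %q: %v, output: %s", scc, err, out)
	}
}
```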
test TestExamples
test TestExamples/pipelinerun-examples-slsa-v1
test TestExamples/pipelinerun-examples-slsa-v1/../examples/pipelineruns/pipeline-output-image.yaml
test TestExamples/pipelinerun-examples-slsa-v2alpha3
test TestExamples/pipelinerun-examples-slsa-v2alpha3/../examples/pipelineruns/pipeline-output-image.yaml
test TestExamples/pipelinerun-examples-slsa-v2alpha4
test TestExamples/pipelinerun-examples-slsa-v2alpha4/../examples/pipelineruns/pipeline-output-image.yaml
test TestExamples/pipelinerun-no-repeated-subjects-v2alpha4
test TestExamples/pipelinerun-no-repeated-subjects-v2alpha4/../examples/v2alpha4/pipeline-with-repeated-results.yaml
test TestExamples/pipelinerun-type-hinted-results-v2alpha4
test TestExamples/pipelinerun-type-hinted-results-v2alpha4/../examples/v2alpha4/pipeline-with-object-type-hinting.yaml
test TestExamples/taskrun-examples-slsa-v1
test TestExamples/taskrun-examples-slsa-v1/../examples/taskruns/task-output-image.yaml
test TestExamples/taskrun-examples-slsa-v2alpha3
test TestExamples/taskrun-examples-slsa-v2alpha3/../examples/taskruns/task-output-image.yaml
test TestExamples/taskrun-examples-slsa-v2alpha4
test TestExamples/taskrun-examples-slsa-v2alpha4/../examples/taskruns/task-output-image.yaml
test TestExamples/taskrun-type-hinted-results-v2alpha4
test TestExamples/taskrun-type-hinted-results-v2alpha4/../examples/v2alpha4/task-with-object-type-hinting.yaml
test TestInstall
test TestMultiBackendStorage
test TestMultiBackendStorage/pipelinerun
test TestMultiBackendStorage/taskrun
test TestOCISigning
test TestOCISigning/cosign
test TestOCISigning/x509
test TestOCIStorage
test TestProvenanceMaterials
test TestProvenanceMaterials/pipelinerun
test TestProvenanceMaterials/taskrun
test TestRekor
test TestRekor/pipelinerun
test TestRekor/taskrun
test TestRetryFailed
test TestRetryFailed/pipelinerun
test TestRetryFailed/taskrun
test TestTektonStorage
test TestTektonStorage/pipelinerun
test TestTektonStorage/taskrun
test TestFulcio
test TestGCSStorage
... skipping 199 lines ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100     9  100     9    0     0     55      0 --:--:-- --:--:-- --:--:--    55
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
>> Deploying Tekton Pipelines
namespace/tekton-pipelines created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-events-controller-cluster-access created
... skipping 64 lines ...
configmap/hubresolver-config created
deployment.apps/tekton-pipelines-remote-resolvers created
service/tekton-pipelines-remote-resolvers created
horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook created
deployment.apps/tekton-pipelines-webhook created
service/tekton-pipelines-webhook created
error: the server doesn't have a resource type "pipelineresources"
No resources found
No resources found
No resources found
No resources found
Waiting until all pods in namespace tekton-pipelines are up.....
All pods are up:
... skipping 208 lines ...
clients.go:123: Deleting namespace earth-7gjpg
--- PASS: TestProvenanceMaterials (21.51s)
--- PASS: TestProvenanceMaterials/taskrun (13.32s)
--- PASS: TestProvenanceMaterials/pipelinerun (8.19s)
=== RUN TestVaultKMSSpire
e2e_test.go:898: Create namespace earth-prcp7 to deploy to
e2e_test.go:900: error creating scc: failed to assign SCC: exec: "oc": executable file not found in $PATH, output:
--- FAIL: TestVaultKMSSpire (1.02s)
=== RUN TestExamples
=== RUN TestExamples/taskrun-examples-slsa-v1
examples_test.go:201: Create namespace earth-hq7hp to deploy to
examples_test.go:575: Adding test ../examples/taskruns/task-output-image.yaml
=== RUN TestExamples/taskrun-examples-slsa-v1/../examples/taskruns/task-output-image.yaml
examples_test.go:225: creating object ../examples/taskruns/task-output-image.yaml
... skipping 1082 lines ...
--- PASS: TestExamples/pipelinerun-examples-slsa-v2alpha4 (9.14s)
--- PASS: TestExamples/pipelinerun-examples-slsa-v2alpha4/../examples/pipelineruns/pipeline-output-image.yaml (5.98s)
--- PASS: TestExamples/pipelinerun-type-hinted-results-v2alpha4 (21.05s)
--- PASS: TestExamples/pipelinerun-type-hinted-results-v2alpha4/../examples/v2alpha4/pipeline-with-object-type-hinting.yaml (5.74s)
--- PASS: TestExamples/pipelinerun-no-repeated-subjects-v2alpha4 (22.60s)
--- PASS: TestExamples/pipelinerun-no-repeated-subjects-v2alpha4/../examples/v2alpha4/pipeline-with-repeated-results.yaml (7.29s)
FAIL
FAIL	github.com/tektoncd/chains/test	367.356s
FAIL
Finished run, return code is 1
XML report written to /logs/artifacts/junit_8vYbfd6g.xml
>> Tekton Chains Logs
2024/09/16 11:55:51 Registering 4 clients
2024/09/16 11:55:51 Registering 2 informer factories
2024/09/16 11:55:51 Registering 2 informers
... skipping 218 lines ...
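A secondary issue visible near the top of the excerpt above: the cluster setup downloaded only 9 bytes (almost certainly an error body rather than a release tarball) and piped it straight into `tar`, producing the `gzip: stdin: not in gzip format` / `tar: Error is not recoverable` messages. A small sketch, in Go for consistency with the other examples here, of validating a download before extracting it; the URL and destination path are placeholders, and this illustrates the check rather than the script the job actually runs.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// fetchTarball downloads url and refuses to write the file unless the HTTP
// status is 200 and the body starts with the gzip magic bytes (0x1f 0x8b).
// This avoids feeding an HTML error page or a "Not Found" body into tar.
func fetchTarball(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %s for %s", resp.Status, url)
	}

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if len(body) < 2 || !bytes.Equal(body[:2], []byte{0x1f, 0x8b}) {
		return fmt.Errorf("%s does not look like a gzip archive (%d bytes)", url, len(body))
	}
	return os.WriteFile(dest, body, 0o644)
}

func main() {
	// Placeholder URL: substitute the release archive the setup script actually fetches.
	if err := fetchTarball("https://example.com/release.tar.gz", "/tmp/release.tar.gz"); err != nil {
		log.Fatal(err)
	}
}
```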
{"level":"info","ts":"2024-09-16T11:56:12.557Z","logger":"watcher","caller":"storage/storage.go:61","msg":"configured backends from config: [tekton oci tekton]","commit":"42ec2f3-dirty"} {"level":"info","ts":"2024-09-16T11:56:12.557Z","logger":"watcher","caller":"storage/storage.go:100","msg":"successfully initialized backends: [tekton oci]","commit":"42ec2f3-dirty"} {"level":"info","ts":"2024-09-16T11:56:12.557Z","logger":"watcher","caller":"pipelinerun/controller.go:68","msg":"could not send close event to WatchBackends()...","commit":"42ec2f3-dirty"} {"level":"info","ts":"2024-09-16T11:56:12.557Z","logger":"watcher","caller":"storage/storage.go:61","msg":"configured backends from config: [tekton oci tekton]","commit":"42ec2f3-dirty"} {"level":"info","ts":"2024-09-16T11:56:12.557Z","logger":"watcher","caller":"storage/storage.go:100","msg":"successfully initialized backends: [tekton oci]","commit":"42ec2f3-dirty"} *************************************** *** E2E TEST FAILED *** *** Start of information dump *** *************************************** >>> All resources: NAMESPACE NAME READY STATUS RESTARTS AGE earth-n5stx pod/pipeline-test-run-t1-pod 0/1 Completed 0 9s earth-n5stx pod/pipeline-test-run-t2-pod 0/1 Completed 0 9s ... skipping 176 lines ... default 18m Normal NodeAllocatableEnforced node/gke-tchains-e2e-cls18356-default-pool-e4db100d-741p Updated Node Allocatable limit across pods default 14m Warning NodeRegistrationCheckerStart node/gke-tchains-e2e-cls18356-default-pool-e4db100d-741p Mon Sep 16 11:37:30 UTC 2024 - ** Starting Node Registration Checker ** default 14m Normal Synced node/gke-tchains-e2e-cls18356-default-pool-e4db100d-741p Node synced successfully default 14m Normal Starting node/gke-tchains-e2e-cls18356-default-pool-e4db100d-741p default 14m Normal RegisteredNode node/gke-tchains-e2e-cls18356-default-pool-e4db100d-741p Node gke-tchains-e2e-cls18356-default-pool-e4db100d-741p event: Registered Node gke-tchains-e2e-cls18356-default-pool-e4db100d-741p in Controller default 11m Warning NodeRegistrationCheckerDidNotRunChecks node/gke-tchains-e2e-cls18356-default-pool-e4db100d-741p Mon Sep 16 11:44:30 UTC 2024 - ** Node ready and registered. ** default 3m49s Warning FailedToCreateEndpoint endpoints/registry Failed to create endpoint for service earth-vzzh2/registry: endpoints "registry" already exists earth-n5stx 9s Normal Scheduled pod/pipeline-test-run-t1-pod Successfully assigned earth-n5stx/pipeline-test-run-t1-pod to gke-tchains-e2e-cls18356-default-pool-bbe1f426-xqss earth-n5stx 9s Normal Pulled pod/pipeline-test-run-t1-pod Container image "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/entrypoint:v0.59.2@sha256:1fe2a1b363c2fc27e3db89c2874f0af164ad56c28a05a03be241863bb65a1403" already present on machine earth-n5stx 9s Normal Created pod/pipeline-test-run-t1-pod Created container prepare earth-n5stx 8s Normal Started pod/pipeline-test-run-t1-pod Started container prepare earth-n5stx 8s Normal Pulled pod/pipeline-test-run-t1-pod Container image "cgr.dev/chainguard/busybox@sha256:19f02276bf8dbdd62f069b922f10c65262cc34b710eea26ff928129a736be791" already present on machine earth-n5stx 8s Normal Created pod/pipeline-test-run-t1-pod Created container place-scripts ... skipping 48 lines ... 
earth-n5stx 7s Normal Pending taskrun/pipeline-test-run-t3 pod status "Initialized":"False"; message: "containers with incomplete status: [place-scripts]" earth-n5stx 6s Normal Pending taskrun/pipeline-test-run-t3 pod status "Ready":"False"; message: "containers with unready status: [step-step1]" earth-n5stx 5s Normal Running taskrun/pipeline-test-run-t3 Not all Steps in the Task have finished executing earth-n5stx 3s Normal Succeeded taskrun/pipeline-test-run-t3 All Steps have completed executing earth-n5stx 9s Normal Started pipelinerun/pipeline-test-run earth-n5stx 9s Normal FinalizerUpdate pipelinerun/pipeline-test-run Updated "pipeline-test-run" finalizers earth-n5stx 9s Normal Running pipelinerun/pipeline-test-run Tasks Completed: 0 (Failed: 0, Cancelled 0), Incomplete: 3, Skipped: 0 earth-n5stx 3s Normal Running pipelinerun/pipeline-test-run Tasks Completed: 1 (Failed: 0, Cancelled 0), Incomplete: 2, Skipped: 0 earth-n5stx 3s Normal Running pipelinerun/pipeline-test-run Tasks Completed: 2 (Failed: 0, Cancelled 0), Incomplete: 1, Skipped: 0 earth-n5stx 2s Normal Succeeded pipelinerun/pipeline-test-run Tasks Completed: 3 (Failed: 0, Cancelled 0), Skipped: 0 gke-managed-cim 14m Warning FailedScheduling pod/kube-state-metrics-0 no nodes available to schedule pods gke-managed-cim 14m Warning FailedScheduling pod/kube-state-metrics-0 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. gke-managed-cim 14m Normal Scheduled pod/kube-state-metrics-0 Successfully assigned gke-managed-cim/kube-state-metrics-0 to gke-tchains-e2e-cls18356-default-pool-e4db100d-741p gke-managed-cim 14m Warning FailedMount pod/kube-state-metrics-0 MountVolume.SetUp failed for volume "kube-api-access-5ldjj" : failed to sync configmap cache: timed out waiting for the condition gke-managed-cim 14m Normal Pulling pod/kube-state-metrics-0 Pulling image "gke.gcr.io/kube-state-metrics:v2.7.0-gke.57@sha256:dbe4ea045f05b2eab7da0d63ecbb5cf9ca36b0037bee7a68220b0a77ed6476d0" gke-managed-cim 14m Normal Pulled pod/kube-state-metrics-0 Successfully pulled image "gke.gcr.io/kube-state-metrics:v2.7.0-gke.57@sha256:dbe4ea045f05b2eab7da0d63ecbb5cf9ca36b0037bee7a68220b0a77ed6476d0" in 1.374s (3.217s including waiting). Image size: 12923236 bytes. gke-managed-cim 14m Normal Created pod/kube-state-metrics-0 Created container kube-state-metrics gke-managed-cim 14m Normal Started pod/kube-state-metrics-0 Started container kube-state-metrics gke-managed-cim 14m Normal Pulling pod/kube-state-metrics-0 Pulling image "gke.gcr.io/gke-metrics-collector:20240501_2300_RC0@sha256:af727fbef6a16960bd3541d89b94e1a4938b57041e5869f148995d8c271a6334" gke-managed-cim 14m Normal Pulled pod/kube-state-metrics-0 Successfully pulled image "gke.gcr.io/gke-metrics-collector:20240501_2300_RC0@sha256:af727fbef6a16960bd3541d89b94e1a4938b57041e5869f148995d8c271a6334" in 1.421s (1.73s including waiting). Image size: 23786769 bytes. 
gke-managed-cim 14m Normal Created pod/kube-state-metrics-0 Created container ksm-metrics-collector gke-managed-cim 14m Normal Started pod/kube-state-metrics-0 Started container ksm-metrics-collector gke-managed-cim 15m Warning FailedCreate statefulset/kube-state-metrics create Pod kube-state-metrics-0 in StatefulSet kube-state-metrics failed error: pods "kube-state-metrics-0" is forbidden: error looking up service account gke-managed-cim/kube-state-metrics: serviceaccount "kube-state-metrics" not found gke-managed-cim 15m Normal SuccessfulCreate statefulset/kube-state-metrics create Pod kube-state-metrics-0 in StatefulSet kube-state-metrics successful gke-managed-cim 13m Warning FailedGetResourceMetric horizontalpodautoscaler/kube-state-metrics unable to get metric memory: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io) gmp-system 14m Warning FailedScheduling pod/alertmanager-0 no nodes available to schedule pods gmp-system 14m Warning FailedScheduling pod/alertmanager-0 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. gmp-system 14m Normal Scheduled pod/alertmanager-0 Successfully assigned gmp-system/alertmanager-0 to gke-tchains-e2e-cls18356-default-pool-e4db100d-741p gmp-system 14m Warning FailedMount pod/alertmanager-0 MountVolume.SetUp failed for volume "config" : secret "alertmanager" not found gmp-system 15m Normal SuccessfulCreate statefulset/alertmanager create Pod alertmanager-0 in StatefulSet alertmanager successful gmp-system 14m Normal SuccessfulDelete statefulset/alertmanager delete Pod alertmanager-0 in StatefulSet alertmanager successful gmp-system 14m Normal Scheduled pod/collector-7h72v Successfully assigned gmp-system/collector-7h72v to gke-tchains-e2e-cls18356-default-pool-e4db100d-741p gmp-system 14m Normal Pulling pod/collector-7h72v Pulling image "gke.gcr.io/gke-distroless/bash:gke_distroless_20240607.00_p0@sha256:85daad626909e7fbf1c8de2e3d4611c685fd0b7a23dbac623829718c8e9359bf" gmp-system 14m Normal Pulled pod/collector-7h72v Successfully pulled image "gke.gcr.io/gke-distroless/bash:gke_distroless_20240607.00_p0@sha256:85daad626909e7fbf1c8de2e3d4611c685fd0b7a23dbac623829718c8e9359bf" in 375ms (1.507s including waiting). Image size: 18373482 bytes. gmp-system 14m Normal Created pod/collector-7h72v Created container config-init ... skipping 4 lines ... gmp-system 14m Normal Started pod/collector-7h72v Started container prometheus gmp-system 14m Normal Pulling pod/collector-7h72v Pulling image "gke.gcr.io/prometheus-engine/config-reloader:v0.12.0-gke.5@sha256:21055a361185da47fbd2c21389fb5cd00b54bfed5c784e0dc258b5b416beaf7e" gmp-system 14m Normal Pulled pod/collector-7h72v Successfully pulled image "gke.gcr.io/prometheus-engine/config-reloader:v0.12.0-gke.5@sha256:21055a361185da47fbd2c21389fb5cd00b54bfed5c784e0dc258b5b416beaf7e" in 1.24s (1.24s including waiting). Image size: 59131319 bytes. 
gmp-system 14m Normal Created pod/collector-7h72v Created container config-reloader gmp-system 14m Normal Started pod/collector-7h72v Started container config-reloader gmp-system 14m Normal Scheduled pod/collector-k74lw Successfully assigned gmp-system/collector-k74lw to gke-tchains-e2e-cls18356-default-pool-5af8ac8c-q6vs gmp-system 14m Warning NetworkNotReady pod/collector-k74lw network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized gmp-system 14m Warning FailedMount pod/collector-k74lw MountVolume.SetUp failed for volume "collection-secret" : object "gmp-system"/"collection" not registered gmp-system 14m Warning FailedMount pod/collector-k74lw MountVolume.SetUp failed for volume "config" : object "gmp-system"/"collector" not registered gmp-system 14m Warning FailedMount pod/collector-k74lw MountVolume.SetUp failed for volume "kube-api-access-zqcn7" : object "gmp-system"/"kube-root-ca.crt" not registered gmp-system 14m Normal Scheduled pod/collector-mdznw Successfully assigned gmp-system/collector-mdznw to gke-tchains-e2e-cls18356-default-pool-bbe1f426-xqss gmp-system 14m Normal Pulling pod/collector-mdznw Pulling image "gke.gcr.io/gke-distroless/bash:gke_distroless_20240607.00_p0@sha256:85daad626909e7fbf1c8de2e3d4611c685fd0b7a23dbac623829718c8e9359bf" gmp-system 14m Normal Pulled pod/collector-mdznw Successfully pulled image "gke.gcr.io/gke-distroless/bash:gke_distroless_20240607.00_p0@sha256:85daad626909e7fbf1c8de2e3d4611c685fd0b7a23dbac623829718c8e9359bf" in 317ms (317ms including waiting). Image size: 18373482 bytes. gmp-system 14m Warning Failed pod/collector-mdznw Error: services have not yet been read at least once, cannot construct envvars gmp-system 14m Normal Pulled pod/collector-mdznw Container image "gke.gcr.io/gke-distroless/bash:gke_distroless_20240607.00_p0@sha256:85daad626909e7fbf1c8de2e3d4611c685fd0b7a23dbac623829718c8e9359bf" already present on machine gmp-system 13m Normal Created pod/collector-mdznw Created container config-init gmp-system 13m Normal Started pod/collector-mdznw Started container config-init gmp-system 13m Normal Pulling pod/collector-mdznw Pulling image "gke.gcr.io/prometheus-engine/prometheus:v2.45.3-gmp.7-gke.0@sha256:8c8e35af7e2b92ac9d82ce640621c0d3aa10d7d62856681af3572d0a8fbb787b" gmp-system 13m Normal Pulled pod/collector-mdznw Successfully pulled image "gke.gcr.io/prometheus-engine/prometheus:v2.45.3-gmp.7-gke.0@sha256:8c8e35af7e2b92ac9d82ce640621c0d3aa10d7d62856681af3572d0a8fbb787b" in 3.321s (3.322s including waiting). Image size: 112349941 bytes. gmp-system 13m Normal Created pod/collector-mdznw Created container prometheus gmp-system 13m Normal Started pod/collector-mdznw Started container prometheus gmp-system 13m Normal Pulling pod/collector-mdznw Pulling image "gke.gcr.io/prometheus-engine/config-reloader:v0.12.0-gke.5@sha256:21055a361185da47fbd2c21389fb5cd00b54bfed5c784e0dc258b5b416beaf7e" gmp-system 13m Normal Pulled pod/collector-mdznw Successfully pulled image "gke.gcr.io/prometheus-engine/config-reloader:v0.12.0-gke.5@sha256:21055a361185da47fbd2c21389fb5cd00b54bfed5c784e0dc258b5b416beaf7e" in 1.709s (1.709s including waiting). Image size: 59131319 bytes. 
gmp-system 13m Normal Created pod/collector-mdznw Created container config-reloader gmp-system 13m Normal Started pod/collector-mdznw Started container config-reloader gmp-system 14m Warning NetworkNotReady pod/collector-r7hvm network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized gmp-system 14m Normal Scheduled pod/collector-r7hvm Successfully assigned gmp-system/collector-r7hvm to gke-tchains-e2e-cls18356-default-pool-5af8ac8c-q6vs gmp-system 14m Warning FailedMount pod/collector-r7hvm MountVolume.SetUp failed for volume "config" : object "gmp-system"/"collector" not registered gmp-system 14m Warning FailedMount pod/collector-r7hvm MountVolume.SetUp failed for volume "collection-secret" : object "gmp-system"/"collection" not registered gmp-system 14m Warning FailedMount pod/collector-r7hvm MountVolume.SetUp failed for volume "kube-api-access-dbpzr" : object "gmp-system"/"kube-root-ca.crt" not registered gmp-system 14m Normal Pulling pod/collector-r7hvm Pulling image "gke.gcr.io/gke-distroless/bash:gke_distroless_20240607.00_p0@sha256:85daad626909e7fbf1c8de2e3d4611c685fd0b7a23dbac623829718c8e9359bf" gmp-system 14m Normal Pulled pod/collector-r7hvm Successfully pulled image "gke.gcr.io/gke-distroless/bash:gke_distroless_20240607.00_p0@sha256:85daad626909e7fbf1c8de2e3d4611c685fd0b7a23dbac623829718c8e9359bf" in 217ms (217ms including waiting). Image size: 18373482 bytes. gmp-system 14m Normal Created pod/collector-r7hvm Created container config-init gmp-system 14m Normal Started pod/collector-r7hvm Started container config-init gmp-system 13m Normal Pulling pod/collector-r7hvm Pulling image "gke.gcr.io/prometheus-engine/prometheus:v2.45.3-gmp.7-gke.0@sha256:8c8e35af7e2b92ac9d82ce640621c0d3aa10d7d62856681af3572d0a8fbb787b" gmp-system 14m Normal Scheduled pod/collector-tjp2x Successfully assigned gmp-system/collector-tjp2x to gke-tchains-e2e-cls18356-default-pool-bbe1f426-xqss gmp-system 14m Warning NetworkNotReady pod/collector-tjp2x network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized gmp-system 14m Warning FailedMount pod/collector-tjp2x MountVolume.SetUp failed for volume "config" : object "gmp-system"/"collector" not registered gmp-system 14m Warning FailedMount pod/collector-tjp2x MountVolume.SetUp failed for volume "collection-secret" : object "gmp-system"/"collection" not registered gmp-system 14m Warning FailedMount pod/collector-tjp2x MountVolume.SetUp failed for volume "kube-api-access-9mg66" : object "gmp-system"/"kube-root-ca.crt" not registered gmp-system 14m Warning FailedMount pod/collector-tjp2x MountVolume.SetUp failed for volume "config" : configmap "collector" not found gmp-system 14m Warning FailedMount pod/collector-tjp2x MountVolume.SetUp failed for volume "collection-secret" : secret "collection" not found gmp-system 14m Normal Scheduled pod/collector-zqnnz Successfully assigned gmp-system/collector-zqnnz to gke-tchains-e2e-cls18356-default-pool-e4db100d-741p gmp-system 14m Warning NetworkNotReady pod/collector-zqnnz network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized gmp-system 14m Warning FailedMount pod/collector-zqnnz MountVolume.SetUp failed for volume "config" : object "gmp-system"/"collector" not registered 
gmp-system 14m Warning FailedMount pod/collector-zqnnz MountVolume.SetUp failed for volume "collection-secret" : object "gmp-system"/"collection" not registered gmp-system 14m Warning FailedMount pod/collector-zqnnz MountVolume.SetUp failed for volume "kube-api-access-w4s8d" : object "gmp-system"/"kube-root-ca.crt" not registered gmp-system 14m Normal SuccessfulCreate daemonset/collector Created pod: collector-zqnnz gmp-system 14m Normal SuccessfulCreate daemonset/collector Created pod: collector-tjp2x gmp-system 14m Normal SuccessfulCreate daemonset/collector Created pod: collector-k74lw gmp-system 14m Normal SuccessfulDelete daemonset/collector Deleted pod: collector-tjp2x gmp-system 14m Normal SuccessfulDelete daemonset/collector Deleted pod: collector-k74lw gmp-system 14m Normal SuccessfulDelete daemonset/collector Deleted pod: collector-zqnnz ... skipping 9 lines ... gmp-system 14m Normal Started pod/gmp-operator-858f4d9857-mk9wn Started container operator gmp-system 15m Normal SuccessfulCreate replicaset/gmp-operator-858f4d9857 Created pod: gmp-operator-858f4d9857-mk9wn gmp-system 15m Normal ScalingReplicaSet deployment/gmp-operator Scaled up replica set gmp-operator-858f4d9857 to 1 gmp-system 14m Warning FailedScheduling pod/rule-evaluator-54d5b49bb4-69sc8 no nodes available to schedule pods gmp-system 14m Warning FailedScheduling pod/rule-evaluator-54d5b49bb4-69sc8 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. gmp-system 14m Normal Scheduled pod/rule-evaluator-54d5b49bb4-69sc8 Successfully assigned gmp-system/rule-evaluator-54d5b49bb4-69sc8 to gke-tchains-e2e-cls18356-default-pool-e4db100d-741p gmp-system 14m Warning FailedMount pod/rule-evaluator-54d5b49bb4-69sc8 MountVolume.SetUp failed for volume "rules-secret" : secret "rules" not found gmp-system 14m Warning FailedMount pod/rule-evaluator-54d5b49bb4-69sc8 MountVolume.SetUp failed for volume "config" : configmap "rule-evaluator" not found gmp-system 14m Warning FailedMount pod/rule-evaluator-54d5b49bb4-69sc8 MountVolume.SetUp failed for volume "rules" : configmap "rules-generated" not found gmp-system 15m Normal SuccessfulCreate replicaset/rule-evaluator-54d5b49bb4 Created pod: rule-evaluator-54d5b49bb4-69sc8 gmp-system 14m Normal SuccessfulDelete replicaset/rule-evaluator-54d5b49bb4 Deleted pod: rule-evaluator-54d5b49bb4-69sc8 gmp-system 15m Normal ScalingReplicaSet deployment/rule-evaluator Scaled up replica set rule-evaluator-54d5b49bb4 to 1 gmp-system 14m Normal ScalingReplicaSet deployment/rule-evaluator Scaled down replica set rule-evaluator-54d5b49bb4 to 0 from 1 kube-system 15m Normal LeaderElection lease/addon-manager gke-56312a4eeba5470b9ed0-bb34-e189-vm_a935817a-6dc6-4ab9-80da-8be8e0af5e0c became leader kube-system 15m Normal LeaderElection lease/addon-resizer gke-56312a4eeba5470b9ed0-1f5c-16fd-vm became leader ... skipping 36 lines ... kube-system 14m Normal Pulled pod/fluentbit-gke-54h2h Successfully pulled image "gke.gcr.io/gke-metrics-collector:20240731_2300_RC0@sha256:363cb043ab30d5cef604d9be51fcbaa3a32f8887a27dc5b11403b1f868e355c5" in 1.096s (1.096s including waiting). Image size: 24576014 bytes. 
kube-system 14m Normal Created pod/fluentbit-gke-54h2h Created container fluentbit-metrics-collector kube-system 14m Normal Started pod/fluentbit-gke-54h2h Started container fluentbit-metrics-collector kube-system 14m Normal Scheduled pod/fluentbit-gke-nnznk Successfully assigned kube-system/fluentbit-gke-nnznk to gke-tchains-e2e-cls18356-default-pool-bbe1f426-xqss kube-system 14m Normal Pulling pod/fluentbit-gke-nnznk Pulling image "gke.gcr.io/gke-distroless/bash:gke_distroless_20240707.00_p0@sha256:9cd98e70ae7072b83fa4a5752ab2f022960e5b7af7585e83b60899f140ebe003" kube-system 14m Normal Pulled pod/fluentbit-gke-nnznk Successfully pulled image "gke.gcr.io/gke-distroless/bash:gke_distroless_20240707.00_p0@sha256:9cd98e70ae7072b83fa4a5752ab2f022960e5b7af7585e83b60899f140ebe003" in 339ms (339ms including waiting). Image size: 18373482 bytes. kube-system 14m Warning Failed pod/fluentbit-gke-nnznk Error: services have not yet been read at least once, cannot construct envvars kube-system 14m Normal Pulled pod/fluentbit-gke-nnznk Container image "gke.gcr.io/gke-distroless/bash:gke_distroless_20240707.00_p0@sha256:9cd98e70ae7072b83fa4a5752ab2f022960e5b7af7585e83b60899f140ebe003" already present on machine kube-system 14m Normal Created pod/fluentbit-gke-nnznk Created container fluentbit-gke-init kube-system 14m Normal Started pod/fluentbit-gke-nnznk Started container fluentbit-gke-init kube-system 13m Normal Pulling pod/fluentbit-gke-nnznk Pulling image "gke.gcr.io/fluent-bit:v1.8.12-gke.31@sha256:b148f7f960f101b6d52efd909fe43fef73cb40cee3571da61034974965605b66" kube-system 13m Normal Pulled pod/fluentbit-gke-nnznk Successfully pulled image "gke.gcr.io/fluent-bit:v1.8.12-gke.31@sha256:b148f7f960f101b6d52efd909fe43fef73cb40cee3571da61034974965605b66" in 4s (4s including waiting). Image size: 94630329 bytes. kube-system 13m Normal Created pod/fluentbit-gke-nnznk Created container fluentbit ... skipping 28 lines ... kube-system 14m Normal SuccessfulCreate daemonset/fluentbit-gke Created pod: fluentbit-gke-54h2h kube-system 15m Normal LeaderElection lease/gcp-controller-manager gke-56312a4eeba5470b9ed0-1f5c-16fd-vm became leader kube-system 15m Normal LeaderElection lease/gke-common-webhook-lock gke-56312a4eeba5470b9ed0-1f5c-16fd-vm_52f05 became leader kube-system 14m Normal Scheduled pod/gke-metrics-agent-9d9s4 Successfully assigned kube-system/gke-metrics-agent-9d9s4 to gke-tchains-e2e-cls18356-default-pool-bbe1f426-xqss kube-system 14m Normal Pulling pod/gke-metrics-agent-9d9s4 Pulling image "gke.gcr.io/gke-metrics-agent:1.12.2-gke.5@sha256:ef8a05d14ebba1cfb777cfce9d9682862edd6bb7aa2d0e4603ed5dc6e9841963" kube-system 14m Normal Pulled pod/gke-metrics-agent-9d9s4 Successfully pulled image "gke.gcr.io/gke-metrics-agent:1.12.2-gke.5@sha256:ef8a05d14ebba1cfb777cfce9d9682862edd6bb7aa2d0e4603ed5dc6e9841963" in 1.88s (1.88s including waiting). Image size: 27062783 bytes. 
kube-system 14m Warning Failed pod/gke-metrics-agent-9d9s4 Error: services have not yet been read at least once, cannot construct envvars kube-system 14m Normal Pulled pod/gke-metrics-agent-9d9s4 Container image "gke.gcr.io/gke-metrics-agent:1.12.2-gke.5@sha256:ef8a05d14ebba1cfb777cfce9d9682862edd6bb7aa2d0e4603ed5dc6e9841963" already present on machine kube-system 14m Warning Failed pod/gke-metrics-agent-9d9s4 Error: services have not yet been read at least once, cannot construct envvars kube-system 14m Normal Pulling pod/gke-metrics-agent-9d9s4 Pulling image "gke.gcr.io/gke-metrics-collector:20240620_2300_RC0@sha256:463e73163c4d343b8a3327e0d2e8e955d22434e9005a1a188275ac55b8cfebb4" kube-system 14m Normal Pulled pod/gke-metrics-agent-9d9s4 Successfully pulled image "gke.gcr.io/gke-metrics-collector:20240620_2300_RC0@sha256:463e73163c4d343b8a3327e0d2e8e955d22434e9005a1a188275ac55b8cfebb4" in 1.185s (1.185s including waiting). Image size: 24343841 bytes. kube-system 14m Warning Failed pod/gke-metrics-agent-9d9s4 Error: services have not yet been read at least once, cannot construct envvars kube-system 14m Normal Pulled pod/gke-metrics-agent-9d9s4 Container image "gke.gcr.io/gke-metrics-agent:1.12.2-gke.5@sha256:ef8a05d14ebba1cfb777cfce9d9682862edd6bb7aa2d0e4603ed5dc6e9841963" already present on machine kube-system 14m Normal Pulled pod/gke-metrics-agent-9d9s4 Container image "gke.gcr.io/gke-metrics-collector:20240620_2300_RC0@sha256:463e73163c4d343b8a3327e0d2e8e955d22434e9005a1a188275ac55b8cfebb4" already present on machine kube-system 14m Normal Scheduled pod/gke-metrics-agent-dzxtp Successfully assigned kube-system/gke-metrics-agent-dzxtp to gke-tchains-e2e-cls18356-default-pool-e4db100d-741p kube-system 14m Normal Pulling pod/gke-metrics-agent-dzxtp Pulling image "gke.gcr.io/gke-metrics-agent:1.12.2-gke.5@sha256:ef8a05d14ebba1cfb777cfce9d9682862edd6bb7aa2d0e4603ed5dc6e9841963" kube-system 14m Normal Pulled pod/gke-metrics-agent-dzxtp Successfully pulled image "gke.gcr.io/gke-metrics-agent:1.12.2-gke.5@sha256:ef8a05d14ebba1cfb777cfce9d9682862edd6bb7aa2d0e4603ed5dc6e9841963" in 2.021s (2.021s including waiting). Image size: 27062783 bytes. kube-system 14m Normal Created pod/gke-metrics-agent-dzxtp Created container gke-metrics-agent ... skipping 22 lines ... kube-system 14m Normal SuccessfulCreate daemonset/gke-metrics-agent Created pod: gke-metrics-agent-qzmr2 kube-system 15m Normal LeaderElection lease/ingress-gce-lock gke-56312a4eeba5470b9ed0-bb34-e189-vm_47880 became leader kube-system 15m Normal LeaderElection lease/ingress-gce-neg-lock gke-56312a4eeba5470b9ed0-bb34-e189-vm_47880 became leader kube-system 14m Normal Scheduled pod/konnectivity-agent-5f45d8b5fc-2pvmv Successfully assigned kube-system/konnectivity-agent-5f45d8b5fc-2pvmv to gke-tchains-e2e-cls18356-default-pool-bbe1f426-xqss kube-system 14m Normal Pulling pod/konnectivity-agent-5f45d8b5fc-2pvmv Pulling image "gke.gcr.io/proxy-agent:v0.30.2-gke.0@sha256:d4e4b901c538beb26b3bad4de01ed6e89cb49c1726f252269a82bfd26c8aa7b6" kube-system 14m Normal Pulled pod/konnectivity-agent-5f45d8b5fc-2pvmv Successfully pulled image "gke.gcr.io/proxy-agent:v0.30.2-gke.0@sha256:d4e4b901c538beb26b3bad4de01ed6e89cb49c1726f252269a82bfd26c8aa7b6" in 1.43s (1.43s including waiting). Image size: 10426715 bytes. 
kube-system 14m Warning Failed pod/konnectivity-agent-5f45d8b5fc-2pvmv Error: services have not yet been read at least once, cannot construct envvars kube-system 14m Normal Pulling pod/konnectivity-agent-5f45d8b5fc-2pvmv Pulling image "gke.gcr.io/gke-metrics-collector:20240717_2300_RC0@sha256:d460e6b5088332f62b990f8a1f7bf6d9eca7c3f41cb974e3db493d6b0fc4ad70" kube-system 14m Normal Pulled pod/konnectivity-agent-5f45d8b5fc-2pvmv Successfully pulled image "gke.gcr.io/gke-metrics-collector:20240717_2300_RC0@sha256:d460e6b5088332f62b990f8a1f7bf6d9eca7c3f41cb974e3db493d6b0fc4ad70" in 1.202s (1.202s including waiting). Image size: 24425624 bytes. kube-system 14m Warning Failed pod/konnectivity-agent-5f45d8b5fc-2pvmv Error: services have not yet been read at least once, cannot construct envvars kube-system 14m Normal Pulled pod/konnectivity-agent-5f45d8b5fc-2pvmv Container image "gke.gcr.io/proxy-agent:v0.30.2-gke.0@sha256:d4e4b901c538beb26b3bad4de01ed6e89cb49c1726f252269a82bfd26c8aa7b6" already present on machine kube-system 13m Normal Pulled pod/konnectivity-agent-5f45d8b5fc-2pvmv Container image "gke.gcr.io/gke-metrics-collector:20240717_2300_RC0@sha256:d460e6b5088332f62b990f8a1f7bf6d9eca7c3f41cb974e3db493d6b0fc4ad70" already present on machine kube-system 13m Normal Created pod/konnectivity-agent-5f45d8b5fc-2pvmv Created container konnectivity-agent kube-system 13m Normal Started pod/konnectivity-agent-5f45d8b5fc-2pvmv Started container konnectivity-agent kube-system 13m Normal Created pod/konnectivity-agent-5f45d8b5fc-2pvmv Created container konnectivity-agent-metrics-collector kube-system 13m Normal Started pod/konnectivity-agent-5f45d8b5fc-2pvmv Started container konnectivity-agent-metrics-collector kube-system 14m Normal Scheduled pod/konnectivity-agent-5f45d8b5fc-8wjqs Successfully assigned kube-system/konnectivity-agent-5f45d8b5fc-8wjqs to gke-tchains-e2e-cls18356-default-pool-bbe1f426-xqss kube-system 14m Normal Pulling pod/konnectivity-agent-5f45d8b5fc-8wjqs Pulling image "gke.gcr.io/proxy-agent:v0.30.2-gke.0@sha256:d4e4b901c538beb26b3bad4de01ed6e89cb49c1726f252269a82bfd26c8aa7b6" kube-system 14m Normal Pulled pod/konnectivity-agent-5f45d8b5fc-8wjqs Successfully pulled image "gke.gcr.io/proxy-agent:v0.30.2-gke.0@sha256:d4e4b901c538beb26b3bad4de01ed6e89cb49c1726f252269a82bfd26c8aa7b6" in 1.521s (1.521s including waiting). Image size: 10426715 bytes. kube-system 14m Warning Failed pod/konnectivity-agent-5f45d8b5fc-8wjqs Error: services have not yet been read at least once, cannot construct envvars kube-system 14m Normal Pulling pod/konnectivity-agent-5f45d8b5fc-8wjqs Pulling image "gke.gcr.io/gke-metrics-collector:20240717_2300_RC0@sha256:d460e6b5088332f62b990f8a1f7bf6d9eca7c3f41cb974e3db493d6b0fc4ad70" kube-system 14m Normal Pulled pod/konnectivity-agent-5f45d8b5fc-8wjqs Successfully pulled image "gke.gcr.io/gke-metrics-collector:20240717_2300_RC0@sha256:d460e6b5088332f62b990f8a1f7bf6d9eca7c3f41cb974e3db493d6b0fc4ad70" in 1.215s (1.215s including waiting). Image size: 24425624 bytes. 
kube-system 14m Warning Failed pod/konnectivity-agent-5f45d8b5fc-8wjqs Error: services have not yet been read at least once, cannot construct envvars kube-system 14m Normal Pulled pod/konnectivity-agent-5f45d8b5fc-8wjqs Container image "gke.gcr.io/proxy-agent:v0.30.2-gke.0@sha256:d4e4b901c538beb26b3bad4de01ed6e89cb49c1726f252269a82bfd26c8aa7b6" already present on machine kube-system 14m Normal Pulled pod/konnectivity-agent-5f45d8b5fc-8wjqs Container image "gke.gcr.io/gke-metrics-collector:20240717_2300_RC0@sha256:d460e6b5088332f62b990f8a1f7bf6d9eca7c3f41cb974e3db493d6b0fc4ad70" already present on machine kube-system 14m Normal Created pod/konnectivity-agent-5f45d8b5fc-8wjqs Created container konnectivity-agent kube-system 14m Normal Started pod/konnectivity-agent-5f45d8b5fc-8wjqs Started container konnectivity-agent kube-system 14m Normal Created pod/konnectivity-agent-5f45d8b5fc-8wjqs Created container konnectivity-agent-metrics-collector kube-system 14m Normal Started pod/konnectivity-agent-5f45d8b5fc-8wjqs Started container konnectivity-agent-metrics-collector ... skipping 23 lines ... kube-system 15m Normal ScalingReplicaSet deployment/konnectivity-agent Scaled up replica set konnectivity-agent-5f45d8b5fc to 1 kube-system 14m Normal ScalingReplicaSet deployment/konnectivity-agent Scaled up replica set konnectivity-agent-5f45d8b5fc to 3 from 1 kube-system 15m Normal LeaderElection lease/kube-controller-manager gke-56312a4eeba5470b9ed0-bb34-e189-vm_f5f1b0bd-7f47-482d-90f3-f4ae4aed8338 became leader kube-system 14m Normal Scheduled pod/kube-dns-76c489c55b-bs85l Successfully assigned kube-system/kube-dns-76c489c55b-bs85l to gke-tchains-e2e-cls18356-default-pool-bbe1f426-xqss kube-system 14m Normal Pulling pod/kube-dns-76c489c55b-bs85l Pulling image "gke.gcr.io/k8s-dns-kube-dns:1.23.0-gke.9@sha256:48d7e5c5cdd5b356e55c3e61a7ae8f2657f15b661b385639f7b983fe134c0709" kube-system 14m Normal Pulled pod/kube-dns-76c489c55b-bs85l Successfully pulled image "gke.gcr.io/k8s-dns-kube-dns:1.23.0-gke.9@sha256:48d7e5c5cdd5b356e55c3e61a7ae8f2657f15b661b385639f7b983fe134c0709" in 1.891s (1.891s including waiting). Image size: 32530343 bytes. kube-system 14m Warning Failed pod/kube-dns-76c489c55b-bs85l Error: services have not yet been read at least once, cannot construct envvars kube-system 14m Normal Pulling pod/kube-dns-76c489c55b-bs85l Pulling image "gke.gcr.io/k8s-dns-dnsmasq-nanny:1.23.0-gke.9@sha256:8c165a991f95755137077c927455e2d996de2c3d5efb0c369f7d94f8dc7d4fb5" kube-system 14m Normal Pulled pod/kube-dns-76c489c55b-bs85l Successfully pulled image "gke.gcr.io/k8s-dns-dnsmasq-nanny:1.23.0-gke.9@sha256:8c165a991f95755137077c927455e2d996de2c3d5efb0c369f7d94f8dc7d4fb5" in 2.058s (2.058s including waiting). Image size: 37174146 bytes. kube-system 14m Warning Failed pod/kube-dns-76c489c55b-bs85l Error: services have not yet been read at least once, cannot construct envvars kube-system 14m Normal Pulling pod/kube-dns-76c489c55b-bs85l Pulling image "gke.gcr.io/k8s-dns-sidecar:1.23.0-gke.9@sha256:5d99c8b4ffbd794477f16644c3a0e51b79246052c8e4518af0614c3274ff3631" kube-system 14m Normal Pulled pod/kube-dns-76c489c55b-bs85l Successfully pulled image "gke.gcr.io/k8s-dns-sidecar:1.23.0-gke.9@sha256:5d99c8b4ffbd794477f16644c3a0e51b79246052c8e4518af0614c3274ff3631" in 1.083s (1.083s including waiting). Image size: 29040121 bytes. 
kube-system 14m Warning Failed pod/kube-dns-76c489c55b-bs85l Error: services have not yet been read at least once, cannot construct envvars kube-system 14m Normal Pulling pod/kube-dns-76c489c55b-bs85l Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.12-gke.13@sha256:7e93b4bfa310d477fd8a977d5772a2f92dc746152906d31b66e872f919de3b5e" kube-system 14m Normal Pulled pod/kube-dns-76c489c55b-bs85l Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.12-gke.13@sha256:7e93b4bfa310d477fd8a977d5772a2f92dc746152906d31b66e872f919de3b5e" in 1.472s (1.472s including waiting). Image size: 35248896 bytes. kube-system 14m Warning Failed pod/kube-dns-76c489c55b-bs85l Error: services have not yet been read at least once, cannot construct envvars kube-system 14m Normal Pulling pod/kube-dns-76c489c55b-bs85l Pulling image "gke.gcr.io/gke-metrics-collector:20240129_2300_RC0@sha256:63c7b3dab8777fc544998d0623cb27f5858f6d2cc498b0bf23009523b3806332" kube-system 14m Normal Pulled pod/kube-dns-76c489c55b-bs85l Successfully pulled image "gke.gcr.io/gke-metrics-collector:20240129_2300_RC0@sha256:63c7b3dab8777fc544998d0623cb27f5858f6d2cc498b0bf23009523b3806332" in 864ms (864ms including waiting). Image size: 23341293 bytes. kube-system 14m Warning Failed pod/kube-dns-76c489c55b-bs85l Error: services have not yet been read at least once, cannot construct envvars kube-system 14m Normal Pulled pod/kube-dns-76c489c55b-bs85l Container image "gke.gcr.io/k8s-dns-kube-dns:1.23.0-gke.9@sha256:48d7e5c5cdd5b356e55c3e61a7ae8f2657f15b661b385639f7b983fe134c0709" already present on machine kube-system 14m Normal Pulled pod/kube-dns-76c489c55b-bs85l Container image "gke.gcr.io/k8s-dns-dnsmasq-nanny:1.23.0-gke.9@sha256:8c165a991f95755137077c927455e2d996de2c3d5efb0c369f7d94f8dc7d4fb5" already present on machine kube-system 14m Normal Pulled pod/kube-dns-76c489c55b-bs85l Container image "gke.gcr.io/k8s-dns-sidecar:1.23.0-gke.9@sha256:5d99c8b4ffbd794477f16644c3a0e51b79246052c8e4518af0614c3274ff3631" already present on machine kube-system 14m Normal Pulled pod/kube-dns-76c489c55b-bs85l Container image "gke.gcr.io/prometheus-to-sd:v0.11.12-gke.13@sha256:7e93b4bfa310d477fd8a977d5772a2f92dc746152906d31b66e872f919de3b5e" already present on machine kube-system 14m Normal Pulled pod/kube-dns-76c489c55b-bs85l (combined from similar events): Container image "gke.gcr.io/gke-metrics-collector:20240129_2300_RC0@sha256:63c7b3dab8777fc544998d0623cb27f5858f6d2cc498b0bf23009523b3806332" already present on machine kube-system 14m Warning FailedScheduling pod/kube-dns-76c489c55b-wmmk6 no nodes available to schedule pods ... skipping 65 lines ... kube-system 14m Normal Scheduled pod/metrics-server-v1.30.3-7bdd4dfd65-8bsdb Successfully assigned kube-system/metrics-server-v1.30.3-7bdd4dfd65-8bsdb to gke-tchains-e2e-cls18356-default-pool-e4db100d-741p kube-system 14m Normal Pulling pod/metrics-server-v1.30.3-7bdd4dfd65-8bsdb Pulling image "gke.gcr.io/metrics-server:v0.7.1-gke.18@sha256:54a72ccbfe0d4490cccd16d17d58a768ef3a7882a6e27db361ddacbb8ff3236d" kube-system 14m Normal Pulled pod/metrics-server-v1.30.3-7bdd4dfd65-8bsdb Successfully pulled image "gke.gcr.io/metrics-server:v0.7.1-gke.18@sha256:54a72ccbfe0d4490cccd16d17d58a768ef3a7882a6e27db361ddacbb8ff3236d" in 1.666s (2.52s including waiting). Image size: 19253575 bytes. 
kube-system 14m Normal Created pod/metrics-server-v1.30.3-7bdd4dfd65-8bsdb Created container metrics-server kube-system 14m Normal Started pod/metrics-server-v1.30.3-7bdd4dfd65-8bsdb Started container metrics-server kube-system 13m Normal Killing pod/metrics-server-v1.30.3-7bdd4dfd65-8bsdb Stopping container metrics-server kube-system 13m Warning Unhealthy pod/metrics-server-v1.30.3-7bdd4dfd65-8bsdb Readiness probe failed: Get "https://10.44.2.6:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) kube-system 15m Warning FailedCreate replicaset/metrics-server-v1.30.3-7bdd4dfd65 Error creating: pods "metrics-server-v1.30.3-7bdd4dfd65-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found kube-system 15m Normal SuccessfulCreate replicaset/metrics-server-v1.30.3-7bdd4dfd65 Created pod: metrics-server-v1.30.3-7bdd4dfd65-8bsdb kube-system 13m Normal SuccessfulDelete replicaset/metrics-server-v1.30.3-7bdd4dfd65 Deleted pod: metrics-server-v1.30.3-7bdd4dfd65-8bsdb kube-system 15m Normal ScalingReplicaSet deployment/metrics-server-v1.30.3 Scaled up replica set metrics-server-v1.30.3-7bdd4dfd65 to 1 kube-system 15m Normal ScalingReplicaSet deployment/metrics-server-v1.30.3 Scaled up replica set metrics-server-v1.30.3-7887b8869c to 1 kube-system 13m Normal ScalingReplicaSet deployment/metrics-server-v1.30.3 Scaled down replica set metrics-server-v1.30.3-7bdd4dfd65 to 0 from 1 kube-system 14m Normal LeaderElection lease/pd-csi-storage-gke-io 1726486835892-2507-pd-csi-storage-gke-io became leader kube-system 14m Normal Scheduled pod/pdcsi-node-56g6b Successfully assigned kube-system/pdcsi-node-56g6b to gke-tchains-e2e-cls18356-default-pool-bbe1f426-xqss kube-system 14m Normal Pulling pod/pdcsi-node-56g6b Pulling image "gke.gcr.io/csi-node-driver-registrar:v2.9.4-gke.8@sha256:c2c21f697f378ced48ecb07573e025fc75436fa3a597c60c1b3377ec221be51f" kube-system 14m Normal Pulled pod/pdcsi-node-56g6b Successfully pulled image "gke.gcr.io/csi-node-driver-registrar:v2.9.4-gke.8@sha256:c2c21f697f378ced48ecb07573e025fc75436fa3a597c60c1b3377ec221be51f" in 1.063s (1.063s including waiting). Image size: 10775814 bytes. kube-system 14m Warning Failed pod/pdcsi-node-56g6b Error: services have not yet been read at least once, cannot construct envvars kube-system 14m Normal Pulling pod/pdcsi-node-56g6b Pulling image "gke.gcr.io/gcp-compute-persistent-disk-csi-driver:v1.14.1-gke.3@sha256:4917abd39f76299d566f0e1382fb2eaa3494b006dcd4031603e990d5351a2681" kube-system 14m Normal Pulled pod/pdcsi-node-56g6b Successfully pulled image "gke.gcr.io/gcp-compute-persistent-disk-csi-driver:v1.14.1-gke.3@sha256:4917abd39f76299d566f0e1382fb2eaa3494b006dcd4031603e990d5351a2681" in 2.896s (2.896s including waiting). Image size: 60824402 bytes. 
kube-system 14m Warning Failed pod/pdcsi-node-56g6b Error: services have not yet been read at least once, cannot construct envvars kube-system 13m Normal Pulled pod/pdcsi-node-56g6b Container image "gke.gcr.io/csi-node-driver-registrar:v2.9.4-gke.8@sha256:c2c21f697f378ced48ecb07573e025fc75436fa3a597c60c1b3377ec221be51f" already present on machine kube-system 13m Normal Pulled pod/pdcsi-node-56g6b Container image "gke.gcr.io/gcp-compute-persistent-disk-csi-driver:v1.14.1-gke.3@sha256:4917abd39f76299d566f0e1382fb2eaa3494b006dcd4031603e990d5351a2681" already present on machine kube-system 13m Normal Created pod/pdcsi-node-56g6b Created container csi-driver-registrar kube-system 13m Normal Started pod/pdcsi-node-56g6b Started container csi-driver-registrar kube-system 13m Normal Created pod/pdcsi-node-56g6b Created container gce-pd-driver kube-system 13m Normal Started pod/pdcsi-node-56g6b Started container gce-pd-driver ... skipping 64 lines ... tekton-chains 10m Normal Scheduled pod/tekton-chains-controller-6767f679c4-wk7wm Successfully assigned tekton-chains/tekton-chains-controller-6767f679c4-wk7wm to gke-tchains-e2e-cls18356-default-pool-bbe1f426-xqss tekton-chains 10m Normal Pulling pod/tekton-chains-controller-6767f679c4-wk7wm Pulling image "gcr.io/tekton-prow-12/tchains-e2e-img/controller-92006fd957c0afd31de6a40b3e33b39f@sha256:2893445b6baa82bd6aa3fb070a3509a02e4c35629441be1a2bca5b29c46c101e" tekton-chains 10m Normal Pulled pod/tekton-chains-controller-6767f679c4-wk7wm Successfully pulled image "gcr.io/tekton-prow-12/tchains-e2e-img/controller-92006fd957c0afd31de6a40b3e33b39f@sha256:2893445b6baa82bd6aa3fb070a3509a02e4c35629441be1a2bca5b29c46c101e" in 2.302s (2.302s including waiting). Image size: 57763473 bytes. tekton-chains 10m Normal Created pod/tekton-chains-controller-6767f679c4-wk7wm Created container tekton-chains-controller tekton-chains 10m Normal Started pod/tekton-chains-controller-6767f679c4-wk7wm Started container tekton-chains-controller tekton-chains 9m24s Normal Killing pod/tekton-chains-controller-6767f679c4-wk7wm Stopping container tekton-chains-controller tekton-chains 10m Warning FailedCreate replicaset/tekton-chains-controller-6767f679c4 Error creating: pods "tekton-chains-controller-6767f679c4-" is forbidden: error looking up service account tekton-chains/tekton-chains-controller: serviceaccount "tekton-chains-controller" not found tekton-chains 10m Normal SuccessfulCreate replicaset/tekton-chains-controller-6767f679c4 Created pod: tekton-chains-controller-6767f679c4-wk7wm tekton-chains 9m24s Normal SuccessfulDelete replicaset/tekton-chains-controller-6767f679c4 Deleted pod: tekton-chains-controller-6767f679c4-wk7wm tekton-chains 44s Normal Scheduled pod/tekton-chains-controller-bd87ccf47-48w8n Successfully assigned tekton-chains/tekton-chains-controller-bd87ccf47-48w8n to gke-tchains-e2e-cls18356-default-pool-bbe1f426-xqss tekton-chains 44s Normal Pulled pod/tekton-chains-controller-bd87ccf47-48w8n Container image "gcr.io/tekton-prow-12/tchains-e2e-img/controller-92006fd957c0afd31de6a40b3e33b39f@sha256:2893445b6baa82bd6aa3fb070a3509a02e4c35629441be1a2bca5b29c46c101e" already present on machine tekton-chains 44s Normal Created pod/tekton-chains-controller-bd87ccf47-48w8n Created container tekton-chains-controller tekton-chains 44s Normal Started pod/tekton-chains-controller-bd87ccf47-48w8n Started container tekton-chains-controller ... skipping 152 lines ... 
tekton-pipelines 21s Normal Killing pod/tekton-pipelines-controller-7478fbc5f8-s5znl Stopping container tekton-pipelines-controller
tekton-pipelines 73s Normal Scheduled pod/tekton-pipelines-controller-7478fbc5f8-xs6bt Successfully assigned tekton-pipelines/tekton-pipelines-controller-7478fbc5f8-xs6bt to gke-tchains-e2e-cls18356-default-pool-5af8ac8c-q6vs
tekton-pipelines 72s Normal Pulled pod/tekton-pipelines-controller-7478fbc5f8-xs6bt Container image "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.59.2@sha256:1acf6f4948852fc9eea74fe85f89ca4d74b059e6d9958522e1bf6b6161013fe9" already present on machine
tekton-pipelines 72s Normal Created pod/tekton-pipelines-controller-7478fbc5f8-xs6bt Created container tekton-pipelines-controller
tekton-pipelines 72s Normal Started pod/tekton-pipelines-controller-7478fbc5f8-xs6bt Started container tekton-pipelines-controller
tekton-pipelines 42s Normal Killing pod/tekton-pipelines-controller-7478fbc5f8-xs6bt Stopping container tekton-pipelines-controller
tekton-pipelines 42s Warning Unhealthy pod/tekton-pipelines-controller-7478fbc5f8-xs6bt Readiness probe failed: Get "http://10.44.1.14:8080/readiness": dial tcp 10.44.1.14:8080: connect: connection refused
tekton-pipelines 13m Normal SuccessfulCreate replicaset/tekton-pipelines-controller-7478fbc5f8 Created pod: tekton-pipelines-controller-7478fbc5f8-ltdwc
tekton-pipelines 73s Normal SuccessfulCreate replicaset/tekton-pipelines-controller-7478fbc5f8 Created pod: tekton-pipelines-controller-7478fbc5f8-xs6bt
tekton-pipelines 42s Normal SuccessfulCreate replicaset/tekton-pipelines-controller-7478fbc5f8 Created pod: tekton-pipelines-controller-7478fbc5f8-s5znl
tekton-pipelines 21s Normal SuccessfulCreate replicaset/tekton-pipelines-controller-7478fbc5f8 Created pod: tekton-pipelines-controller-7478fbc5f8-bgwxl
tekton-pipelines 13m Normal ScalingReplicaSet deployment/tekton-pipelines-controller Scaled up replica set tekton-pipelines-controller-7478fbc5f8 to 1
tekton-pipelines 13m Normal Scheduled pod/tekton-pipelines-webhook-6c74bb8d75-nxt9l Successfully assigned tekton-pipelines/tekton-pipelines-webhook-6c74bb8d75-nxt9l to gke-tchains-e2e-cls18356-default-pool-5af8ac8c-q6vs
... skipping 8 lines ...
vault 9m39s Normal Pulling pod/vault-0 Pulling image "hashicorp/vault:1.9.2"
vault 9m36s Normal Pulled pod/vault-0 Successfully pulled image "hashicorp/vault:1.9.2" in 3.021s (3.021s including waiting). Image size: 72665935 bytes.
vault 9m36s Normal Created pod/vault-0 Created container vault
vault 9m36s Normal Started pod/vault-0 Started container vault
vault 9m40s Normal SuccessfulCreate statefulset/vault create Pod vault-0 in StatefulSet vault successful
***************************************
*** E2E TEST FAILED ***
*** End of information dump ***
***************************************
2024/09/16 11:56:14 process.go:155: Step '/home/prow/go/src/github.com/tektoncd/chains/test/e2e-tests.sh --run-tests' finished in 13m40.092347587s
2024/09/16 11:56:14 main.go:319: Something went wrong: encountered 1 errors: [error during /home/prow/go/src/github.com/tektoncd/chains/test/e2e-tests.sh --run-tests: exit status 1]
Test subprocess exited with code 0
Artifacts were written to /logs/artifacts
Test result code is 1
==================================
==== INTEGRATION TESTS FAILED ====
==================================
+ EXIT_VALUE=1
+ set +o xtrace
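The tail of the run shows the result plumbing: the e2e wrapper records the failing step's exit status (`e2e-tests.sh --run-tests: exit status 1`) and surfaces it as the job result (`Test result code is 1`, `EXIT_VALUE=1`) even though the wrapping subprocess itself exited 0. A small sketch of the general pattern of capturing and propagating a child process's exit code; this is an assumed illustration of the idea, not the actual hack/e2e.go or Prow code.

```go
package main

import (
	"errors"
	"log"
	"os"
	"os/exec"
)

// runStep runs the given script, streams its output, and returns the child
// process's exit code so the caller can surface it as the overall job result.
func runStep(script string, args ...string) int {
	cmd := exec.Command(script, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The script ran but exited non-zero: report that code.
			return exitErr.ExitCode()
		}
		// The script could not be started at all.
		log.Printf("step did not start: %v", err)
		return 1
	}
	return 0
}

func main() {
	// Placeholder invocation: the real job runs test/e2e-tests.sh --run-tests.
	os.Exit(runStep("./test/e2e-tests.sh", "--run-tests"))
}
```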