Result: FAILURE
Tests: 1 failed / 39 succeeded
Started: 2024-11-04 17:07
Elapsed: 34m13s
Revision: 8868b479da3b9707e8340f44dc6fdef9d411cbb1
Refs: 1202
E2E:Machine: n1-standard-4
E2E:MaxNodes: 3
E2E:MinNodes: 1
E2E:Region: us-central1
E2E:Version: 1.30.5-gke.1355000

Test Failures

test TestVaultKMSSpire 1.00s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=test\sTestVaultKMSSpire$'
    e2e_test.go:898: Create namespace earth-qgvjp to deploy to
    e2e_test.go:900: error creating scc: failed to assign SCC: exec: "oc": executable file not found in $PATH, output: 
				from junit_WiCjrNdd.xml
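
TestVaultKMSSpire fails during namespace setup: the test shells out to the OpenShift CLI ("oc") to assign a SecurityContextConstraint, but the Prow environment for this GKE job does not have "oc" on its PATH. A minimal guard, sketched below under the assumption that the helper invokes "oc" via os/exec (the requireOC name is hypothetical, not the actual helper in chains/test), would turn the hard failure into a skip on non-OpenShift clusters:

    package test

    import (
        "os/exec"
        "testing"
    )

    // requireOC skips the calling test when the OpenShift CLI is not installed,
    // so the SCC-assignment step only runs on clusters that can support it.
    // Hypothetical sketch; the real setup code may be structured differently.
    func requireOC(t *testing.T) {
        t.Helper()
        if _, err := exec.LookPath("oc"); err != nil {
            t.Skipf("skipping SCC assignment, oc not found in $PATH: %v", err)
        }
    }

Calling a guard like this before the SCC step would keep the rest of the suite green on GKE; alternatively, the job image could install oc so the SCC path is actually exercised.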

39 passed tests, 2 skipped tests (details omitted)

Error lines from build-log.txt

... skipping 199 lines ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100     9  100     9    0     0     43      0 --:--:-- --:--:-- --:--:--    43

gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
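
The tar failure above is a symptom of the preceding curl transfer: only 9 bytes were received, so the payload handed to tar is almost certainly a short error body rather than a gzipped archive, hence "not in gzip format". A small pre-extraction check, sketched here as a hypothetical standalone Go helper (the setup script here appears to pipe curl output straight into tar), would reject such a payload by validating the gzip magic bytes before extraction:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // looksLikeGzip reports whether data begins with the gzip magic bytes 0x1f 0x8b.
    func looksLikeGzip(data []byte) bool {
        return len(data) >= 2 && bytes.Equal(data[:2], []byte{0x1f, 0x8b})
    }

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: checkdl <archive-url>")
            os.Exit(2)
        }
        resp, err := http.Get(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, "download failed:", err)
            os.Exit(1)
        }
        defer resp.Body.Close()

        body, err := io.ReadAll(resp.Body)
        if err != nil {
            fmt.Fprintln(os.Stderr, "read failed:", err)
            os.Exit(1)
        }
        if resp.StatusCode != http.StatusOK || !looksLikeGzip(body) {
            // A 9-byte non-archive body, as in the log above, is rejected here
            // instead of surfacing later as "gzip: stdin: not in gzip format".
            fmt.Fprintf(os.Stderr, "unexpected payload: status=%d size=%d bytes\n", resp.StatusCode, len(body))
            os.Exit(1)
        }
        // body can now be handed to compress/gzip + archive/tar for extraction.
    }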
>> Deploying Tekton Pipelines
namespace/tekton-pipelines created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-events-controller-cluster-access created
... skipping 64 lines ...
configmap/hubresolver-config created
deployment.apps/tekton-pipelines-remote-resolvers created
service/tekton-pipelines-remote-resolvers created
horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook created
deployment.apps/tekton-pipelines-webhook created
service/tekton-pipelines-webhook created
error: the server doesn't have a resource type "pipelineresources"
No resources found
No resources found
No resources found
No resources found
Waiting until all pods in namespace tekton-pipelines are up....
All pods are up:
... skipping 208 lines ...
    clients.go:123: Deleting namespace earth-rbxcn
--- PASS: TestProvenanceMaterials (18.96s)
    --- PASS: TestProvenanceMaterials/taskrun (9.24s)
    --- PASS: TestProvenanceMaterials/pipelinerun (9.72s)
=== RUN   TestVaultKMSSpire
    e2e_test.go:898: Create namespace earth-qgvjp to deploy to
    e2e_test.go:900: error creating scc: failed to assign SCC: exec: "oc": executable file not found in $PATH, output: 
--- FAIL: TestVaultKMSSpire (1.00s)
=== RUN   TestExamples
=== RUN   TestExamples/taskrun-examples-slsa-v1
    examples_test.go:201: Create namespace earth-w4rpk to deploy to
    examples_test.go:575: Adding test ../examples/taskruns/task-output-image.yaml
=== RUN   TestExamples/taskrun-examples-slsa-v1/../examples/taskruns/task-output-image.yaml
    examples_test.go:225: creating object ../examples/taskruns/task-output-image.yaml
... skipping 1082 lines ...
    --- PASS: TestExamples/pipelinerun-examples-slsa-v2alpha4 (9.52s)
        --- PASS: TestExamples/pipelinerun-examples-slsa-v2alpha4/../examples/pipelineruns/pipeline-output-image.yaml (6.32s)
    --- PASS: TestExamples/pipelinerun-type-hinted-results-v2alpha4 (23.01s)
        --- PASS: TestExamples/pipelinerun-type-hinted-results-v2alpha4/../examples/v2alpha4/pipeline-with-object-type-hinting.yaml (7.63s)
    --- PASS: TestExamples/pipelinerun-no-repeated-subjects-v2alpha4 (22.60s)
        --- PASS: TestExamples/pipelinerun-no-repeated-subjects-v2alpha4/../examples/v2alpha4/pipeline-with-repeated-results.yaml (7.19s)
FAIL
FAIL	github.com/tektoncd/chains/test	467.469s
FAIL
Finished run, return code is 1
XML report written to /logs/artifacts/junit_WiCjrNdd.xml
>> Tekton Chains Logs
2024/11/04 17:41:46 Registering 4 clients
2024/11/04 17:41:46 Registering 2 informer factories
2024/11/04 17:41:46 Registering 2 informers
... skipping 210 lines ...
{"level":"info","ts":"2024-11-04T17:42:07.057Z","logger":"watcher","caller":"storage/storage.go:61","msg":"configured backends from config: [tekton oci tekton]","commit":"e0b7e4e-dirty"}
{"level":"info","ts":"2024-11-04T17:42:07.057Z","logger":"watcher","caller":"storage/storage.go:100","msg":"successfully initialized backends: [tekton oci]","commit":"e0b7e4e-dirty"}
{"level":"info","ts":"2024-11-04T17:42:07.057Z","logger":"watcher","caller":"pipelinerun/controller.go:68","msg":"could not send close event to WatchBackends()...","commit":"e0b7e4e-dirty"}
{"level":"info","ts":"2024-11-04T17:42:07.057Z","logger":"watcher","caller":"storage/storage.go:61","msg":"configured backends from config: [tekton oci tekton]","commit":"e0b7e4e-dirty"}
{"level":"info","ts":"2024-11-04T17:42:07.057Z","logger":"watcher","caller":"storage/storage.go:100","msg":"successfully initialized backends: [tekton oci]","commit":"e0b7e4e-dirty"}
***************************************
***         E2E TEST FAILED         ***
***    Start of information dump    ***
***************************************
>>> All resources:
NAMESPACE                    NAME                                                                 READY   STATUS      RESTARTS      AGE
earth-r8ppd                  pod/pipeline-test-run-t1-pod                                         0/1     Completed   0             8s
earth-r8ppd                  pod/pipeline-test-run-t2-pod                                         0/1     Completed   0             8s
... skipping 175 lines ...
default                      28m         Normal    NodeAllocatableEnforced                  node/gke-tchains-e2e-cls18534-default-pool-d5c5bc46-xq1j             Updated Node Allocatable limit across pods
default                      24m         Warning   NodeRegistrationCheckerStart             node/gke-tchains-e2e-cls18534-default-pool-d5c5bc46-xq1j             Mon Nov  4 17:13:24 UTC 2024 - ** Starting Node Registration Checker **
default                      24m         Normal    Synced                                   node/gke-tchains-e2e-cls18534-default-pool-d5c5bc46-xq1j             Node synced successfully
default                      24m         Normal    RegisteredNode                           node/gke-tchains-e2e-cls18534-default-pool-d5c5bc46-xq1j             Node gke-tchains-e2e-cls18534-default-pool-d5c5bc46-xq1j event: Registered Node gke-tchains-e2e-cls18534-default-pool-d5c5bc46-xq1j in Controller
default                      24m         Normal    Starting                                 node/gke-tchains-e2e-cls18534-default-pool-d5c5bc46-xq1j             
default                      21m         Warning   NodeRegistrationCheckerDidNotRunChecks   node/gke-tchains-e2e-cls18534-default-pool-d5c5bc46-xq1j             Mon Nov  4 17:20:25 UTC 2024 - **     Node ready and registered. **
default                      5m45s       Warning   FailedToCreateEndpoint                   endpoints/registry                                                   Failed to create endpoint for service earth-wk5px/registry: endpoints "registry" already exists
default                      23m         Warning   FailedToCreateEndpoint                   endpoints/tekton-pipelines-webhook                                   Failed to create endpoint for service tekton-pipelines/tekton-pipelines-webhook: endpoints "tekton-pipelines-webhook" already exists
earth-r8ppd                  9s          Normal    Scheduled                                pod/pipeline-test-run-t1-pod                                         Successfully assigned earth-r8ppd/pipeline-test-run-t1-pod to gke-tchains-e2e-cls18534-default-pool-0eb37a9d-lxh5
earth-r8ppd                  9s          Normal    Pulled                                   pod/pipeline-test-run-t1-pod                                         Container image "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/entrypoint:v0.62.0@sha256:dd24ff7543eaea98ae735820675f1a696956e19a9de4c3a960c8be44959aa930" already present on machine
earth-r8ppd                  9s          Normal    Created                                  pod/pipeline-test-run-t1-pod                                         Created container prepare
earth-r8ppd                  8s          Normal    Started                                  pod/pipeline-test-run-t1-pod                                         Started container prepare
earth-r8ppd                  8s          Normal    Pulled                                   pod/pipeline-test-run-t1-pod                                         Container image "cgr.dev/chainguard/busybox@sha256:19f02276bf8dbdd62f069b922f10c65262cc34b710eea26ff928129a736be791" already present on machine
earth-r8ppd                  8s          Normal    Created                                  pod/pipeline-test-run-t1-pod                                         Created container place-scripts
... skipping 46 lines ...
earth-r8ppd                  7s          Normal    Pending                                  taskrun/pipeline-test-run-t3                                         pod status "Initialized":"False"; message: "containers with incomplete status: [place-scripts]"
earth-r8ppd                  6s          Normal    Pending                                  taskrun/pipeline-test-run-t3                                         pod status "Ready":"False"; message: "containers with unready status: [step-step1]"
earth-r8ppd                  5s          Normal    Running                                  taskrun/pipeline-test-run-t3                                         Not all Steps in the Task have finished executing
earth-r8ppd                  3s          Normal    Succeeded                                taskrun/pipeline-test-run-t3                                         All Steps have completed executing
earth-r8ppd                  9s          Normal    Started                                  pipelinerun/pipeline-test-run                                        
earth-r8ppd                  9s          Normal    FinalizerUpdate                          pipelinerun/pipeline-test-run                                        Updated "pipeline-test-run" finalizers
earth-r8ppd                  9s          Normal    Running                                  pipelinerun/pipeline-test-run                                        Tasks Completed: 0 (Failed: 0, Cancelled 0), Incomplete: 3, Skipped: 0
earth-r8ppd                  4s          Normal    Running                                  pipelinerun/pipeline-test-run                                        Tasks Completed: 1 (Failed: 0, Cancelled 0), Incomplete: 2, Skipped: 0
earth-r8ppd                  3s          Normal    Running                                  pipelinerun/pipeline-test-run                                        Tasks Completed: 2 (Failed: 0, Cancelled 0), Incomplete: 1, Skipped: 0
earth-r8ppd                  3s          Normal    Succeeded                                pipelinerun/pipeline-test-run                                        Tasks Completed: 3 (Failed: 0, Cancelled 0), Skipped: 0
gke-managed-cim              25m         Warning   FailedScheduling                         pod/kube-state-metrics-0                                             no nodes available to schedule pods
gke-managed-cim              24m         Warning   FailedScheduling                         pod/kube-state-metrics-0                                             no nodes available to schedule pods
gke-managed-cim              24m         Normal    Scheduled                                pod/kube-state-metrics-0                                             Successfully assigned gke-managed-cim/kube-state-metrics-0 to gke-tchains-e2e-cls18534-default-pool-4d035c64-jh1q
gke-managed-cim              23m         Normal    Pulling                                  pod/kube-state-metrics-0                                             Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/kube-state-metrics:v2.7.0-gke.64@sha256:9b7f4be917b3a3c68ae75b47efa0081f23e163d7c94de053bffb0b2884763cdf"
gke-managed-cim              23m         Normal    Pulled                                   pod/kube-state-metrics-0                                             Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/kube-state-metrics:v2.7.0-gke.64@sha256:9b7f4be917b3a3c68ae75b47efa0081f23e163d7c94de053bffb0b2884763cdf" in 1.487s (4.844s including waiting). Image size: 12923526 bytes.
gke-managed-cim              23m         Normal    Created                                  pod/kube-state-metrics-0                                             Created container kube-state-metrics
gke-managed-cim              23m         Normal    Started                                  pod/kube-state-metrics-0                                             Started container kube-state-metrics
gke-managed-cim              23m         Normal    Pulling                                  pod/kube-state-metrics-0                                             Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240501_2300_RC0@sha256:af727fbef6a16960bd3541d89b94e1a4938b57041e5869f148995d8c271a6334"
gke-managed-cim              23m         Normal    Pulled                                   pod/kube-state-metrics-0                                             Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240501_2300_RC0@sha256:af727fbef6a16960bd3541d89b94e1a4938b57041e5869f148995d8c271a6334" in 1.767s (6.39s including waiting). Image size: 23786769 bytes.
gke-managed-cim              23m         Normal    Created                                  pod/kube-state-metrics-0                                             Created container ksm-metrics-collector
gke-managed-cim              23m         Normal    Started                                  pod/kube-state-metrics-0                                             Started container ksm-metrics-collector
gke-managed-cim              23m         Warning   Unhealthy                                pod/kube-state-metrics-0                                             Readiness probe failed: Get "http://10.28.0.6:8081/": dial tcp 10.28.0.6:8081: connect: connection refused
gke-managed-cim              23m         Warning   Unhealthy                                pod/kube-state-metrics-0                                             Liveness probe failed: Get "http://10.28.0.6:8080/healthz": dial tcp 10.28.0.6:8080: connect: connection refused
gke-managed-cim              23m         Normal    Pulled                                   pod/kube-state-metrics-0                                             Container image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/kube-state-metrics:v2.7.0-gke.64@sha256:9b7f4be917b3a3c68ae75b47efa0081f23e163d7c94de053bffb0b2884763cdf" already present on machine
gke-managed-cim              25m         Warning   FailedCreate                             statefulset/kube-state-metrics                                       create Pod kube-state-metrics-0 in StatefulSet kube-state-metrics failed error: pods "kube-state-metrics-0" is forbidden: error looking up service account gke-managed-cim/kube-state-metrics: serviceaccount "kube-state-metrics" not found
gke-managed-cim              25m         Normal    SuccessfulCreate                         statefulset/kube-state-metrics                                       create Pod kube-state-metrics-0 in StatefulSet kube-state-metrics successful
gke-managed-cim              22m         Warning   FailedGetResourceMetric                  horizontalpodautoscaler/kube-state-metrics                           unable to get metric memory: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
gmp-system                   25m         Warning   FailedScheduling                         pod/alertmanager-0                                                   no nodes available to schedule pods
gmp-system                   24m         Warning   FailedScheduling                         pod/alertmanager-0                                                   no nodes available to schedule pods
gmp-system                   24m         Normal    Scheduled                                pod/alertmanager-0                                                   Successfully assigned gmp-system/alertmanager-0 to gke-tchains-e2e-cls18534-default-pool-4d035c64-jh1q
gmp-system                   23m         Warning   FailedMount                              pod/alertmanager-0                                                   MountVolume.SetUp failed for volume "config" : secret "alertmanager" not found
gmp-system                   25m         Normal    SuccessfulCreate                         statefulset/alertmanager                                             create Pod alertmanager-0 in StatefulSet alertmanager successful
gmp-system                   23m         Normal    SuccessfulDelete                         statefulset/alertmanager                                             delete Pod alertmanager-0 in StatefulSet alertmanager successful
gmp-system                   23m         Normal    Scheduled                                pod/collector-4trwb                                                  Successfully assigned gmp-system/collector-4trwb to gke-tchains-e2e-cls18534-default-pool-0eb37a9d-lxh5
gmp-system                   23m         Normal    Pulled                                   pod/collector-4trwb                                                  Container image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240807.00_p0@sha256:ec5022c67b5316ae07f44ed374894e9bb55d548884d293da6b0d350a46dff2df" already present on machine
gmp-system                   23m         Normal    Created                                  pod/collector-4trwb                                                  Created container config-init
gmp-system                   23m         Normal    Started                                  pod/collector-4trwb                                                  Started container config-init
... skipping 15 lines ...
gmp-system                   23m         Normal    Started                                  pod/collector-8h4sr                                                  Started container prometheus
gmp-system                   23m         Normal    Pulling                                  pod/collector-8h4sr                                                  Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/config-reloader:v0.13.1-gke.0@sha256:d199f266545ee281fa51d30e0a5f9c4da27da23055b153ca93adbf7483d19633"
gmp-system                   23m         Normal    Pulled                                   pod/collector-8h4sr                                                  Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/config-reloader:v0.13.1-gke.0@sha256:d199f266545ee281fa51d30e0a5f9c4da27da23055b153ca93adbf7483d19633" in 1.267s (1.267s including waiting). Image size: 59834302 bytes.
gmp-system                   23m         Normal    Created                                  pod/collector-8h4sr                                                  Created container config-reloader
gmp-system                   23m         Normal    Started                                  pod/collector-8h4sr                                                  Started container config-reloader
gmp-system                   24m         Normal    Scheduled                                pod/collector-9hdvx                                                  Successfully assigned gmp-system/collector-9hdvx to gke-tchains-e2e-cls18534-default-pool-d5c5bc46-xq1j
gmp-system                   24m         Warning   NetworkNotReady                          pod/collector-9hdvx                                                  network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
gmp-system                   24m         Warning   FailedMount                              pod/collector-9hdvx                                                  MountVolume.SetUp failed for volume "collection-secret" : object "gmp-system"/"collection" not registered
gmp-system                   24m         Warning   FailedMount                              pod/collector-9hdvx                                                  MountVolume.SetUp failed for volume "config" : object "gmp-system"/"collector" not registered
gmp-system                   24m         Warning   FailedMount                              pod/collector-9hdvx                                                  MountVolume.SetUp failed for volume "kube-api-access-mpk5w" : object "gmp-system"/"kube-root-ca.crt" not registered
gmp-system                   23m         Warning   FailedMount                              pod/collector-9hdvx                                                  MountVolume.SetUp failed for volume "collection-secret" : secret "collection" not found
gmp-system                   23m         Warning   FailedMount                              pod/collector-9hdvx                                                  MountVolume.SetUp failed for volume "config" : configmap "collector" not found
gmp-system                   24m         Normal    Scheduled                                pod/collector-sk68m                                                  Successfully assigned gmp-system/collector-sk68m to gke-tchains-e2e-cls18534-default-pool-4d035c64-jh1q
gmp-system                   23m         Warning   FailedMount                              pod/collector-sk68m                                                  MountVolume.SetUp failed for volume "collection-secret" : secret "collection" not found
gmp-system                   23m         Warning   FailedMount                              pod/collector-sk68m                                                  MountVolume.SetUp failed for volume "config" : configmap "collector" not found
gmp-system                   23m         Normal    Scheduled                                pod/collector-xzl8x                                                  Successfully assigned gmp-system/collector-xzl8x to gke-tchains-e2e-cls18534-default-pool-4d035c64-jh1q
gmp-system                   23m         Normal    Pulling                                  pod/collector-xzl8x                                                  Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240807.00_p0@sha256:ec5022c67b5316ae07f44ed374894e9bb55d548884d293da6b0d350a46dff2df"
gmp-system                   23m         Normal    Pulled                                   pod/collector-xzl8x                                                  Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240807.00_p0@sha256:ec5022c67b5316ae07f44ed374894e9bb55d548884d293da6b0d350a46dff2df" in 208ms (208ms including waiting). Image size: 18373482 bytes.
gmp-system                   23m         Normal    Created                                  pod/collector-xzl8x                                                  Created container config-init
gmp-system                   23m         Normal    Started                                  pod/collector-xzl8x                                                  Started container config-init
gmp-system                   23m         Normal    Pulling                                  pod/collector-xzl8x                                                  Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/prometheus:v2.45.3-gmp.8-gke.0@sha256:3e6493d4b01ab583382731491d980bc164873ad4969e92c0bdd0da278359ccac"
... skipping 2 lines ...
gmp-system                   23m         Normal    Started                                  pod/collector-xzl8x                                                  Started container prometheus
gmp-system                   23m         Normal    Pulling                                  pod/collector-xzl8x                                                  Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/config-reloader:v0.13.1-gke.0@sha256:d199f266545ee281fa51d30e0a5f9c4da27da23055b153ca93adbf7483d19633"
gmp-system                   23m         Normal    Pulled                                   pod/collector-xzl8x                                                  Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/config-reloader:v0.13.1-gke.0@sha256:d199f266545ee281fa51d30e0a5f9c4da27da23055b153ca93adbf7483d19633" in 1.251s (1.251s including waiting). Image size: 59834302 bytes.
gmp-system                   23m         Normal    Created                                  pod/collector-xzl8x                                                  Created container config-reloader
gmp-system                   23m         Normal    Started                                  pod/collector-xzl8x                                                  Started container config-reloader
gmp-system                   24m         Normal    Scheduled                                pod/collector-znk4n                                                  Successfully assigned gmp-system/collector-znk4n to gke-tchains-e2e-cls18534-default-pool-0eb37a9d-lxh5
gmp-system                   24m         Warning   NetworkNotReady                          pod/collector-znk4n                                                  network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
gmp-system                   24m         Warning   FailedMount                              pod/collector-znk4n                                                  MountVolume.SetUp failed for volume "config" : object "gmp-system"/"collector" not registered
gmp-system                   24m         Warning   FailedMount                              pod/collector-znk4n                                                  MountVolume.SetUp failed for volume "collection-secret" : object "gmp-system"/"collection" not registered
gmp-system                   24m         Warning   FailedMount                              pod/collector-znk4n                                                  MountVolume.SetUp failed for volume "kube-api-access-286dl" : object "gmp-system"/"kube-root-ca.crt" not registered
gmp-system                   24m         Warning   FailedMount                              pod/collector-znk4n                                                  MountVolume.SetUp failed for volume "collection-secret" : secret "collection" not found
gmp-system                   23m         Warning   FailedMount                              pod/collector-znk4n                                                  MountVolume.SetUp failed for volume "config" : configmap "collector" not found
gmp-system                   24m         Normal    SuccessfulCreate                         daemonset/collector                                                  Created pod: collector-sk68m
gmp-system                   24m         Normal    SuccessfulCreate                         daemonset/collector                                                  Created pod: collector-9hdvx
gmp-system                   24m         Normal    SuccessfulCreate                         daemonset/collector                                                  Created pod: collector-znk4n
gmp-system                   23m         Normal    SuccessfulDelete                         daemonset/collector                                                  Deleted pod: collector-9hdvx
gmp-system                   23m         Normal    SuccessfulDelete                         daemonset/collector                                                  Deleted pod: collector-znk4n
gmp-system                   23m         Normal    SuccessfulDelete                         daemonset/collector                                                  Deleted pod: collector-sk68m
... skipping 4 lines ...
gmp-system                   24m         Warning   FailedScheduling                         pod/gmp-operator-6b8f5dc4b-8dgw5                                     no nodes available to schedule pods
gmp-system                   24m         Normal    Scheduled                                pod/gmp-operator-6b8f5dc4b-8dgw5                                     Successfully assigned gmp-system/gmp-operator-6b8f5dc4b-8dgw5 to gke-tchains-e2e-cls18534-default-pool-4d035c64-jh1q
gmp-system                   23m         Normal    Pulling                                  pod/gmp-operator-6b8f5dc4b-8dgw5                                     Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/operator:v0.13.1-gke.0@sha256:b06adf14b06c9fc809d4b8db41329e4f3c34d9b1baa2abd45542ad817aed3917"
gmp-system                   23m         Normal    Pulled                                   pod/gmp-operator-6b8f5dc4b-8dgw5                                     Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/operator:v0.13.1-gke.0@sha256:b06adf14b06c9fc809d4b8db41329e4f3c34d9b1baa2abd45542ad817aed3917" in 3.559s (3.559s including waiting). Image size: 84461449 bytes.
gmp-system                   23m         Normal    Created                                  pod/gmp-operator-6b8f5dc4b-8dgw5                                     Created container operator
gmp-system                   23m         Normal    Started                                  pod/gmp-operator-6b8f5dc4b-8dgw5                                     Started container operator
gmp-system                   23m         Warning   Unhealthy                                pod/gmp-operator-6b8f5dc4b-8dgw5                                     Readiness probe failed: Get "http://10.28.0.3:18081/readyz": dial tcp 10.28.0.3:18081: connect: connection refused
gmp-system                   23m         Warning   Unhealthy                                pod/gmp-operator-6b8f5dc4b-8dgw5                                     Liveness probe failed: Get "http://10.28.0.3:18081/healthz": dial tcp 10.28.0.3:18081: connect: connection refused
gmp-system                   23m         Normal    Killing                                  pod/gmp-operator-6b8f5dc4b-8dgw5                                     Container operator failed liveness probe, will be restarted
gmp-system                   23m         Normal    Pulled                                   pod/gmp-operator-6b8f5dc4b-8dgw5                                     Container image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/operator:v0.13.1-gke.0@sha256:b06adf14b06c9fc809d4b8db41329e4f3c34d9b1baa2abd45542ad817aed3917" already present on machine
gmp-system                   25m         Normal    SuccessfulCreate                         replicaset/gmp-operator-6b8f5dc4b                                    Created pod: gmp-operator-6b8f5dc4b-8dgw5
gmp-system                   25m         Normal    ScalingReplicaSet                        deployment/gmp-operator                                              Scaled up replica set gmp-operator-6b8f5dc4b to 1
gmp-system                   23m         Normal    Scheduled                                pod/rule-evaluator-577c6bdccc-z8x7d                                  Successfully assigned gmp-system/rule-evaluator-577c6bdccc-z8x7d to gke-tchains-e2e-cls18534-default-pool-0eb37a9d-lxh5
gmp-system                   23m         Normal    Pulling                                  pod/rule-evaluator-577c6bdccc-z8x7d                                  Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240807.00_p0@sha256:ec5022c67b5316ae07f44ed374894e9bb55d548884d293da6b0d350a46dff2df"
gmp-system                   23m         Normal    Pulled                                   pod/rule-evaluator-577c6bdccc-z8x7d                                  Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240807.00_p0@sha256:ec5022c67b5316ae07f44ed374894e9bb55d548884d293da6b0d350a46dff2df" in 223ms (223ms including waiting). Image size: 18373482 bytes.
... skipping 6 lines ...
gmp-system                   23m         Normal    Pulling                                  pod/rule-evaluator-577c6bdccc-z8x7d                                  Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/rule-evaluator:v0.13.1-gke.0@sha256:2ff728b00f5e0a652045e7a5a877be00cadf049ecb9b026c3654c2391588c045"
gmp-system                   23m         Normal    Pulled                                   pod/rule-evaluator-577c6bdccc-z8x7d                                  Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/rule-evaluator:v0.13.1-gke.0@sha256:2ff728b00f5e0a652045e7a5a877be00cadf049ecb9b026c3654c2391588c045" in 2.768s (4.589s including waiting). Image size: 84791546 bytes.
gmp-system                   23m         Normal    Created                                  pod/rule-evaluator-577c6bdccc-z8x7d                                  Created container evaluator
gmp-system                   23m         Normal    Started                                  pod/rule-evaluator-577c6bdccc-z8x7d                                  Started container evaluator
gmp-system                   23m         Normal    Killing                                  pod/rule-evaluator-577c6bdccc-z8x7d                                  Stopping container config-reloader
gmp-system                   23m         Normal    Killing                                  pod/rule-evaluator-577c6bdccc-z8x7d                                  Stopping container evaluator
gmp-system                   23m         Warning   Unhealthy                                pod/rule-evaluator-577c6bdccc-z8x7d                                  Readiness probe failed: Get "http://10.28.2.3:19092/-/ready": dial tcp 10.28.2.3:19092: connect: connection refused
gmp-system                   23m         Normal    SuccessfulCreate                         replicaset/rule-evaluator-577c6bdccc                                 Created pod: rule-evaluator-577c6bdccc-z8x7d
gmp-system                   23m         Normal    SuccessfulDelete                         replicaset/rule-evaluator-577c6bdccc                                 Deleted pod: rule-evaluator-577c6bdccc-z8x7d
gmp-system                   25m         Warning   FailedScheduling                         pod/rule-evaluator-6f659bc47f-zz4n9                                  no nodes available to schedule pods
gmp-system                   24m         Warning   FailedScheduling                         pod/rule-evaluator-6f659bc47f-zz4n9                                  no nodes available to schedule pods
gmp-system                   24m         Normal    Scheduled                                pod/rule-evaluator-6f659bc47f-zz4n9                                  Successfully assigned gmp-system/rule-evaluator-6f659bc47f-zz4n9 to gke-tchains-e2e-cls18534-default-pool-4d035c64-jh1q
gmp-system                   23m         Warning   FailedMount                              pod/rule-evaluator-6f659bc47f-zz4n9                                  MountVolume.SetUp failed for volume "rules" : configmap "rules-generated" not found
gmp-system                   23m         Warning   FailedMount                              pod/rule-evaluator-6f659bc47f-zz4n9                                  MountVolume.SetUp failed for volume "config" : configmap "rule-evaluator" not found
gmp-system                   23m         Warning   FailedMount                              pod/rule-evaluator-6f659bc47f-zz4n9                                  MountVolume.SetUp failed for volume "rules-secret" : secret "rules" not found
gmp-system                   25m         Normal    SuccessfulCreate                         replicaset/rule-evaluator-6f659bc47f                                 Created pod: rule-evaluator-6f659bc47f-zz4n9
gmp-system                   23m         Normal    SuccessfulDelete                         replicaset/rule-evaluator-6f659bc47f                                 Deleted pod: rule-evaluator-6f659bc47f-zz4n9
gmp-system                   25m         Normal    ScalingReplicaSet                        deployment/rule-evaluator                                            Scaled up replica set rule-evaluator-6f659bc47f to 1
gmp-system                   23m         Normal    ScalingReplicaSet                        deployment/rule-evaluator                                            Scaled up replica set rule-evaluator-577c6bdccc to 1
gmp-system                   23m         Normal    ScalingReplicaSet                        deployment/rule-evaluator                                            Scaled down replica set rule-evaluator-6f659bc47f to 0 from 1
gmp-system                   23m         Normal    ScalingReplicaSet                        deployment/rule-evaluator                                            Scaled down replica set rule-evaluator-577c6bdccc to 0 from 1
... skipping 39 lines ...
kube-system                  23m         Normal    Pulled                                   pod/fluentbit-gke-27grl                                              Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240930_2300_RC0@sha256:1400cd227bc15faf620fc5926accb2d3b8d995e6ae121ad36dd13b180bd7ccbd" in 1.157s (1.157s including waiting). Image size: 24861074 bytes.
kube-system                  23m         Normal    Created                                  pod/fluentbit-gke-27grl                                              Created container fluentbit-metrics-collector
kube-system                  23m         Normal    Started                                  pod/fluentbit-gke-27grl                                              Started container fluentbit-metrics-collector
kube-system                  24m         Normal    Scheduled                                pod/fluentbit-gke-4ntwk                                              Successfully assigned kube-system/fluentbit-gke-4ntwk to gke-tchains-e2e-cls18534-default-pool-d5c5bc46-xq1j
kube-system                  24m         Normal    Pulling                                  pod/fluentbit-gke-4ntwk                                              Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240907.00_p0@sha256:ff5d5abcc0cdd74d3ba43b0959b246c4432c14356b1414ce35d8feff820eb664"
kube-system                  24m         Normal    Pulled                                   pod/fluentbit-gke-4ntwk                                              Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240907.00_p0@sha256:ff5d5abcc0cdd74d3ba43b0959b246c4432c14356b1414ce35d8feff820eb664" in 263ms (263ms including waiting). Image size: 18373482 bytes.
kube-system                  23m         Warning   Failed                                   pod/fluentbit-gke-4ntwk                                              Error: services have not yet been read at least once, cannot construct envvars
kube-system                  23m         Normal    Pulled                                   pod/fluentbit-gke-4ntwk                                              Container image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240907.00_p0@sha256:ff5d5abcc0cdd74d3ba43b0959b246c4432c14356b1414ce35d8feff820eb664" already present on machine
kube-system                  23m         Normal    Created                                  pod/fluentbit-gke-4ntwk                                              Created container fluentbit-gke-init
kube-system                  23m         Normal    Started                                  pod/fluentbit-gke-4ntwk                                              Started container fluentbit-gke-init
kube-system                  23m         Normal    Pulling                                  pod/fluentbit-gke-4ntwk                                              Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/fluent-bit:v1.8.12-gke.43@sha256:0f02f0fd2845b399af269291ebfb3e9c50592a322433e8ee27c0ce6b30a95c04"
kube-system                  23m         Normal    Pulled                                   pod/fluentbit-gke-4ntwk                                              Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/fluent-bit:v1.8.12-gke.43@sha256:0f02f0fd2845b399af269291ebfb3e9c50592a322433e8ee27c0ce6b30a95c04" in 3.509s (3.509s including waiting). Image size: 93697591 bytes.
kube-system                  23m         Normal    Created                                  pod/fluentbit-gke-4ntwk                                              Created container fluentbit
... skipping 27 lines ...
kube-system                  24m         Normal    SuccessfulCreate                         daemonset/fluentbit-gke                                              Created pod: fluentbit-gke-7hw5f
kube-system                  25m         Normal    LeaderElection                           lease/gcp-controller-manager                                         gke-9e55ed08529d464b8ba1-6187-0654-vm became leader
kube-system                  25m         Normal    LeaderElection                           lease/gke-common-webhook-lock                                        gke-9e55ed08529d464b8ba1-6187-0654-vm_d3eef became leader
kube-system                  24m         Normal    Scheduled                                pod/gke-metrics-agent-7fmmf                                          Successfully assigned kube-system/gke-metrics-agent-7fmmf to gke-tchains-e2e-cls18534-default-pool-d5c5bc46-xq1j
kube-system                  24m         Normal    Pulling                                  pod/gke-metrics-agent-7fmmf                                          Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-agent:1.12.3-gke.1@sha256:b48588f900ff5f13bb2f57c39947f12498d4c37c61ef6c6b0ae90bfea902f7f5"
kube-system                  24m         Normal    Pulled                                   pod/gke-metrics-agent-7fmmf                                          Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-agent:1.12.3-gke.1@sha256:b48588f900ff5f13bb2f57c39947f12498d4c37c61ef6c6b0ae90bfea902f7f5" in 1.301s (1.301s including waiting). Image size: 27033679 bytes.
kube-system                  23m         Warning   Failed                                   pod/gke-metrics-agent-7fmmf                                          Error: services have not yet been read at least once, cannot construct envvars
kube-system                  23m         Normal    Pulled                                   pod/gke-metrics-agent-7fmmf                                          Container image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-agent:1.12.3-gke.1@sha256:b48588f900ff5f13bb2f57c39947f12498d4c37c61ef6c6b0ae90bfea902f7f5" already present on machine
kube-system                  23m         Warning   Failed                                   pod/gke-metrics-agent-7fmmf                                          Error: services have not yet been read at least once, cannot construct envvars
kube-system                  24m         Normal    Pulling                                  pod/gke-metrics-agent-7fmmf                                          Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240620_2300_RC0@sha256:463e73163c4d343b8a3327e0d2e8e955d22434e9005a1a188275ac55b8cfebb4"
kube-system                  24m         Normal    Pulled                                   pod/gke-metrics-agent-7fmmf                                          Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240620_2300_RC0@sha256:463e73163c4d343b8a3327e0d2e8e955d22434e9005a1a188275ac55b8cfebb4" in 1.252s (1.252s including waiting). Image size: 24343841 bytes.
kube-system                  24m         Warning   Failed                                   pod/gke-metrics-agent-7fmmf                                          Error: services have not yet been read at least once, cannot construct envvars
kube-system                  23m         Normal    Pulled                                   pod/gke-metrics-agent-7fmmf                                          Container image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-agent:1.12.3-gke.1@sha256:b48588f900ff5f13bb2f57c39947f12498d4c37c61ef6c6b0ae90bfea902f7f5" already present on machine
kube-system                  23m         Normal    Pulled                                   pod/gke-metrics-agent-7fmmf                                          Container image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240620_2300_RC0@sha256:463e73163c4d343b8a3327e0d2e8e955d22434e9005a1a188275ac55b8cfebb4" already present on machine
kube-system                  24m         Normal    Scheduled                                pod/gke-metrics-agent-pmssm                                          Successfully assigned kube-system/gke-metrics-agent-pmssm to gke-tchains-e2e-cls18534-default-pool-0eb37a9d-lxh5
kube-system                  24m         Normal    Pulling                                  pod/gke-metrics-agent-pmssm                                          Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-agent:1.12.3-gke.1@sha256:b48588f900ff5f13bb2f57c39947f12498d4c37c61ef6c6b0ae90bfea902f7f5"
kube-system                  24m         Normal    Pulled                                   pod/gke-metrics-agent-pmssm                                          Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-agent:1.12.3-gke.1@sha256:b48588f900ff5f13bb2f57c39947f12498d4c37c61ef6c6b0ae90bfea902f7f5" in 1.562s (1.563s including waiting). Image size: 27033679 bytes.
kube-system                  24m         Normal    Created                                  pod/gke-metrics-agent-pmssm                                          Created container gke-metrics-agent
... skipping 22 lines ...
kube-system                  24m         Normal    SuccessfulCreate                         daemonset/gke-metrics-agent                                          Created pod: gke-metrics-agent-pmssm
kube-system                  25m         Normal    LeaderElection                           lease/ingress-gce-lock                                               gke-9e55ed08529d464b8ba1-30da-7311-vm_b7d79 became leader
kube-system                  25m         Normal    LeaderElection                           lease/ingress-gce-neg-lock                                           gke-9e55ed08529d464b8ba1-30da-7311-vm_b7d79 became leader
kube-system                  23m         Normal    Scheduled                                pod/konnectivity-agent-7bd5f984f7-kpqgp                              Successfully assigned kube-system/konnectivity-agent-7bd5f984f7-kpqgp to gke-tchains-e2e-cls18534-default-pool-d5c5bc46-xq1j
kube-system                  23m         Normal    Pulling                                  pod/konnectivity-agent-7bd5f984f7-kpqgp                              Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/proxy-agent:v0.30.2-gke.2@sha256:d0346df5dceadc5bd9fa6a00415353bcc85b18c48a40bee5aa0df698c13c39f4"
kube-system                  23m         Normal    Pulled                                   pod/konnectivity-agent-7bd5f984f7-kpqgp                              Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/proxy-agent:v0.30.2-gke.2@sha256:d0346df5dceadc5bd9fa6a00415353bcc85b18c48a40bee5aa0df698c13c39f4" in 1.52s (1.52s including waiting). Image size: 10288044 bytes.
kube-system                  23m         Warning   Failed                                   pod/konnectivity-agent-7bd5f984f7-kpqgp                              Error: services have not yet been read at least once, cannot construct envvars
kube-system                  23m         Normal    Pulling                                  pod/konnectivity-agent-7bd5f984f7-kpqgp                              Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240717_2300_RC0@sha256:d460e6b5088332f62b990f8a1f7bf6d9eca7c3f41cb974e3db493d6b0fc4ad70"
kube-system                  23m         Normal    Pulled                                   pod/konnectivity-agent-7bd5f984f7-kpqgp                              Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240717_2300_RC0@sha256:d460e6b5088332f62b990f8a1f7bf6d9eca7c3f41cb974e3db493d6b0fc4ad70" in 1.225s (1.225s including waiting). Image size: 24425624 bytes.
kube-system                  23m         Normal    Created                                  pod/konnectivity-agent-7bd5f984f7-kpqgp                              Created container konnectivity-agent-metrics-collector
kube-system                  23m         Normal    Started                                  pod/konnectivity-agent-7bd5f984f7-kpqgp                              Started container konnectivity-agent-metrics-collector
kube-system                  23m         Normal    Pulled                                   pod/konnectivity-agent-7bd5f984f7-kpqgp                              Container image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/proxy-agent:v0.30.2-gke.2@sha256:d0346df5dceadc5bd9fa6a00415353bcc85b18c48a40bee5aa0df698c13c39f4" already present on machine
kube-system                  23m         Normal    Created                                  pod/konnectivity-agent-7bd5f984f7-kpqgp                              Created container konnectivity-agent
... skipping 33 lines ...
kube-system                  25m         Normal    ScalingReplicaSet                        deployment/konnectivity-agent                                        Scaled up replica set konnectivity-agent-7bd5f984f7 to 1
kube-system                  23m         Normal    ScalingReplicaSet                        deployment/konnectivity-agent                                        Scaled up replica set konnectivity-agent-7bd5f984f7 to 3 from 1
kube-system                  25m         Normal    LeaderElection                           lease/kube-controller-manager                                        gke-9e55ed08529d464b8ba1-6187-0654-vm_d465dad4-8c9d-4744-abf7-d69a119d6e1b became leader
kube-system                  23m         Normal    Scheduled                                pod/kube-dns-7b8bf554dd-fw4m7                                        Successfully assigned kube-system/kube-dns-7b8bf554dd-fw4m7 to gke-tchains-e2e-cls18534-default-pool-d5c5bc46-xq1j
kube-system                  23m         Normal    Pulling                                  pod/kube-dns-7b8bf554dd-fw4m7                                        Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/k8s-dns-kube-dns:1.23.0-gke.9@sha256:48d7e5c5cdd5b356e55c3e61a7ae8f2657f15b661b385639f7b983fe134c0709"
kube-system                  23m         Normal    Pulled                                   pod/kube-dns-7b8bf554dd-fw4m7                                        Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/k8s-dns-kube-dns:1.23.0-gke.9@sha256:48d7e5c5cdd5b356e55c3e61a7ae8f2657f15b661b385639f7b983fe134c0709" in 1.944s (1.944s including waiting). Image size: 32530343 bytes.
kube-system                  23m         Warning   Failed                                   pod/kube-dns-7b8bf554dd-fw4m7                                        Error: services have not yet been read at least once, cannot construct envvars
kube-system                  23m         Normal    Pulling                                  pod/kube-dns-7b8bf554dd-fw4m7                                        Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/k8s-dns-dnsmasq-nanny:1.23.0-gke.9@sha256:8c165a991f95755137077c927455e2d996de2c3d5efb0c369f7d94f8dc7d4fb5"
kube-system                  23m         Normal    Pulled                                   pod/kube-dns-7b8bf554dd-fw4m7                                        Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/k8s-dns-dnsmasq-nanny:1.23.0-gke.9@sha256:8c165a991f95755137077c927455e2d996de2c3d5efb0c369f7d94f8dc7d4fb5" in 2.156s (2.156s including waiting). Image size: 37174146 bytes.
kube-system                  23m         Normal    Created                                  pod/kube-dns-7b8bf554dd-fw4m7                                        Created container dnsmasq
kube-system                  23m         Normal    Started                                  pod/kube-dns-7b8bf554dd-fw4m7                                        Started container dnsmasq
kube-system                  23m         Normal    Pulling                                  pod/kube-dns-7b8bf554dd-fw4m7                                        Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/k8s-dns-sidecar:1.23.0-gke.9@sha256:5d99c8b4ffbd794477f16644c3a0e51b79246052c8e4518af0614c3274ff3631"
kube-system                  23m         Normal    Pulled                                   pod/kube-dns-7b8bf554dd-fw4m7                                        Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/k8s-dns-sidecar:1.23.0-gke.9@sha256:5d99c8b4ffbd794477f16644c3a0e51b79246052c8e4518af0614c3274ff3631" in 1.192s (1.192s including waiting). Image size: 29040121 bytes.
... skipping 79 lines ...
kube-system                  24m         Normal    Scheduled                                pod/metrics-server-v1.30.3-8987bd844-97nzz                           Successfully assigned kube-system/metrics-server-v1.30.3-8987bd844-97nzz to gke-tchains-e2e-cls18534-default-pool-4d035c64-jh1q
kube-system                  23m         Normal    Pulling                                  pod/metrics-server-v1.30.3-8987bd844-97nzz                           Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/metrics-server:v0.7.1-gke.23@sha256:525d9a5c0336ada0fd1f81570dab011a3cabc2456576afa769803934e48f4a5a"
kube-system                  23m         Normal    Pulled                                   pod/metrics-server-v1.30.3-8987bd844-97nzz                           Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/metrics-server:v0.7.1-gke.23@sha256:525d9a5c0336ada0fd1f81570dab011a3cabc2456576afa769803934e48f4a5a" in 2.857s (4.581s including waiting). Image size: 19252717 bytes.
kube-system                  23m         Normal    Created                                  pod/metrics-server-v1.30.3-8987bd844-97nzz                           Created container metrics-server
kube-system                  23m         Normal    Started                                  pod/metrics-server-v1.30.3-8987bd844-97nzz                           Started container metrics-server
kube-system                  22m         Normal    Killing                                  pod/metrics-server-v1.30.3-8987bd844-97nzz                           Stopping container metrics-server
kube-system                  24m         Warning   FailedCreate                             replicaset/metrics-server-v1.30.3-8987bd844                          Error creating: pods "metrics-server-v1.30.3-8987bd844-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
kube-system                  24m         Normal    SuccessfulCreate                         replicaset/metrics-server-v1.30.3-8987bd844                          Created pod: metrics-server-v1.30.3-8987bd844-97nzz
kube-system                  22m         Normal    SuccessfulDelete                         replicaset/metrics-server-v1.30.3-8987bd844                          Deleted pod: metrics-server-v1.30.3-8987bd844-97nzz
kube-system                  24m         Normal    ScalingReplicaSet                        deployment/metrics-server-v1.30.3                                    Scaled up replica set metrics-server-v1.30.3-8987bd844 to 1
kube-system                  24m         Normal    ScalingReplicaSet                        deployment/metrics-server-v1.30.3                                    Scaled up replica set metrics-server-v1.30.3-7fff7dc68d to 1
kube-system                  22m         Normal    ScalingReplicaSet                        deployment/metrics-server-v1.30.3                                    Scaled down replica set metrics-server-v1.30.3-8987bd844 to 0 from 1
kube-system                  24m         Normal    LeaderElection                           lease/pd-csi-storage-gke-io                                          1730740605750-6298-pd-csi-storage-gke-io became leader
... skipping 15 lines ...
kube-system                  24m         Normal    Pulled                                   pod/pdcsi-node-9mldp                                                 Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gcp-compute-persistent-disk-csi-driver:v1.14.2-gke.7@sha256:65fae72192dd620ab0063b826cf26fc36cf60b6e5114e723175185ad20ebf2d3" in 5.256s (5.256s including waiting). Image size: 60814678 bytes.
kube-system                  24m         Normal    Created                                  pod/pdcsi-node-9mldp                                                 Created container gce-pd-driver
kube-system                  24m         Normal    Started                                  pod/pdcsi-node-9mldp                                                 Started container gce-pd-driver
kube-system                  24m         Normal    Scheduled                                pod/pdcsi-node-z27bt                                                 Successfully assigned kube-system/pdcsi-node-z27bt to gke-tchains-e2e-cls18534-default-pool-d5c5bc46-xq1j
kube-system                  24m         Normal    Pulling                                  pod/pdcsi-node-z27bt                                                 Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/csi-node-driver-registrar:v2.9.4-gke.16@sha256:9be303bb0c0d209912e57e01d229f78ff08126a641296b200c28a43b02e4ae1e"
kube-system                  24m         Normal    Pulled                                   pod/pdcsi-node-z27bt                                                 Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/csi-node-driver-registrar:v2.9.4-gke.16@sha256:9be303bb0c0d209912e57e01d229f78ff08126a641296b200c28a43b02e4ae1e" in 908ms (908ms including waiting). Image size: 10772851 bytes.
kube-system                  23m         Warning   Failed                                   pod/pdcsi-node-z27bt                                                 Error: services have not yet been read at least once, cannot construct envvars
kube-system                  24m         Normal    Pulling                                  pod/pdcsi-node-z27bt                                                 Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gcp-compute-persistent-disk-csi-driver:v1.14.2-gke.7@sha256:65fae72192dd620ab0063b826cf26fc36cf60b6e5114e723175185ad20ebf2d3"
kube-system                  24m         Normal    Pulled                                   pod/pdcsi-node-z27bt                                                 Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gcp-compute-persistent-disk-csi-driver:v1.14.2-gke.7@sha256:65fae72192dd620ab0063b826cf26fc36cf60b6e5114e723175185ad20ebf2d3" in 3.261s (3.261s including waiting). Image size: 60814678 bytes.
kube-system                  23m         Warning   Failed                                   pod/pdcsi-node-z27bt                                                 Error: services have not yet been read at least once, cannot construct envvars
kube-system                  23m         Normal    Pulled                                   pod/pdcsi-node-z27bt                                                 Container image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/csi-node-driver-registrar:v2.9.4-gke.16@sha256:9be303bb0c0d209912e57e01d229f78ff08126a641296b200c28a43b02e4ae1e" already present on machine
kube-system                  23m         Normal    Pulled                                   pod/pdcsi-node-z27bt                                                 Container image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gcp-compute-persistent-disk-csi-driver:v1.14.2-gke.7@sha256:65fae72192dd620ab0063b826cf26fc36cf60b6e5114e723175185ad20ebf2d3" already present on machine
kube-system                  23m         Normal    Created                                  pod/pdcsi-node-z27bt                                                 Created container csi-driver-registrar
kube-system                  23m         Normal    Started                                  pod/pdcsi-node-z27bt                                                 Started container csi-driver-registrar
kube-system                  24m         Normal    SuccessfulCreate                         daemonset/pdcsi-node                                                 Created pod: pdcsi-node-675jh
kube-system                  24m         Normal    SuccessfulCreate                         daemonset/pdcsi-node                                                 Created pod: pdcsi-node-z27bt
... skipping 45 lines ...
tekton-chains                13m         Normal    Scheduled                                pod/tekton-chains-controller-6b45896cfb-nr89w                        Successfully assigned tekton-chains/tekton-chains-controller-6b45896cfb-nr89w to gke-tchains-e2e-cls18534-default-pool-0eb37a9d-lxh5
tekton-chains                13m         Normal    Pulling                                  pod/tekton-chains-controller-6b45896cfb-nr89w                        Pulling image "gcr.io/tekton-prow-3/tchains-e2e-img/controller-92006fd957c0afd31de6a40b3e33b39f@sha256:331b4a5695f1b3fde001d3bbe477c1ecfe47e83c0405b86d0109fb86a90f1ba1"
tekton-chains                13m         Normal    Pulled                                   pod/tekton-chains-controller-6b45896cfb-nr89w                        Successfully pulled image "gcr.io/tekton-prow-3/tchains-e2e-img/controller-92006fd957c0afd31de6a40b3e33b39f@sha256:331b4a5695f1b3fde001d3bbe477c1ecfe47e83c0405b86d0109fb86a90f1ba1" in 2.785s (2.785s including waiting). Image size: 67804691 bytes.
tekton-chains                13m         Normal    Created                                  pod/tekton-chains-controller-6b45896cfb-nr89w                        Created container tekton-chains-controller
tekton-chains                13m         Normal    Started                                  pod/tekton-chains-controller-6b45896cfb-nr89w                        Started container tekton-chains-controller
tekton-chains                12m         Normal    Killing                                  pod/tekton-chains-controller-6b45896cfb-nr89w                        Stopping container tekton-chains-controller
tekton-chains                13m         Warning   FailedCreate                             replicaset/tekton-chains-controller-6b45896cfb                       Error creating: pods "tekton-chains-controller-6b45896cfb-" is forbidden: error looking up service account tekton-chains/tekton-chains-controller: serviceaccount "tekton-chains-controller" not found
tekton-chains                13m         Normal    SuccessfulCreate                         replicaset/tekton-chains-controller-6b45896cfb                       Created pod: tekton-chains-controller-6b45896cfb-nr89w
tekton-chains                12m         Normal    SuccessfulDelete                         replicaset/tekton-chains-controller-6b45896cfb                       Deleted pod: tekton-chains-controller-6b45896cfb-nr89w
tekton-chains                3m38s       Normal    Scheduled                                pod/tekton-chains-controller-75458bb949-2ft58                        Successfully assigned tekton-chains/tekton-chains-controller-75458bb949-2ft58 to gke-tchains-e2e-cls18534-default-pool-0eb37a9d-lxh5
tekton-chains                3m38s       Normal    Pulled                                   pod/tekton-chains-controller-75458bb949-2ft58                        Container image "gcr.io/tekton-prow-3/tchains-e2e-img/controller-92006fd957c0afd31de6a40b3e33b39f@sha256:331b4a5695f1b3fde001d3bbe477c1ecfe47e83c0405b86d0109fb86a90f1ba1" already present on machine
tekton-chains                3m38s       Normal    Created                                  pod/tekton-chains-controller-75458bb949-2ft58                        Created container tekton-chains-controller
tekton-chains                3m38s       Normal    Started                                  pod/tekton-chains-controller-75458bb949-2ft58                        Started container tekton-chains-controller
... skipping 171 lines ...
vault                        12m         Normal    Pulling                                  pod/vault-0                                                          Pulling image "hashicorp/vault:1.9.2"
vault                        12m         Normal    Pulled                                   pod/vault-0                                                          Successfully pulled image "hashicorp/vault:1.9.2" in 3.728s (3.728s including waiting). Image size: 72665935 bytes.
vault                        12m         Normal    Created                                  pod/vault-0                                                          Created container vault
vault                        12m         Normal    Started                                  pod/vault-0                                                          Started container vault
vault                        12m         Normal    SuccessfulCreate                         statefulset/vault                                                    create Pod vault-0 in StatefulSet vault successful
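
Note: the event dump above is the diagnostic output the harness prints when a run fails. The Warning events in it (FailedCreate while a service account was still being reconciled, "services have not yet been read at least once, cannot construct envvars") were each followed by successful retries and container starts, so they read as start-up noise rather than the failure itself. When triaging a dump like this, filtering to warnings only can help; a minimal example using standard kubectl flags against the test cluster:

    kubectl get events --all-namespaces \
      --field-selector type=Warning \
      --sort-by=.lastTimestamp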
***************************************
***         E2E TEST FAILED         ***
***     End of information dump     ***
***************************************
2024/11/04 17:42:09 process.go:155: Step '/home/prow/go/src/github.com/tektoncd/chains/test/e2e-tests.sh --run-tests' finished in 23m29.713845207s
2024/11/04 17:42:09 main.go:319: Something went wrong: encountered 1 errors: [error during /home/prow/go/src/github.com/tektoncd/chains/test/e2e-tests.sh --run-tests: exit status 1]
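The wrapper reports the failing step verbatim, so the same step can be re-run in isolation from a checkout of the chains repo once kubectl points at a suitable test cluster (any additional environment setup is not shown in this log and is assumed here):

    ./test/e2e-tests.sh --run-tests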
Test subprocess exited with code 0
Artifacts were written to /logs/artifacts
Test result code is 1
==================================
==== INTEGRATION TESTS FAILED ====
==================================
+ EXIT_VALUE=1
+ set +o xtrace
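
The trailing xtrace lines show the usual pattern for surfacing a step failure in CI: the test step's status is captured into EXIT_VALUE and returned after cleanup, so the job is marked failed even though later commands exit cleanly. A minimal sketch of that pattern (illustrative only; the function name below is hypothetical and not taken from the actual scripts):

    #!/usr/bin/env bash
    set -o xtrace

    run_tests() {
      # placeholder for the real test step, e.g. ./test/e2e-tests.sh --run-tests
      false
    }

    EXIT_VALUE=0
    run_tests || EXIT_VALUE=$?

    # cleanup would run here regardless of the test outcome
    set +o xtrace
    exit "${EXIT_VALUE}"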