Result FAILURE
Tests 0 failed / 0 succeeded
Started 2024-11-13 20:37
Elapsed 11m25s
Revision e601ce536d792042c306e8acbc935b3ea4a8c121
Refs 1250
E2E:Machine n1-standard-4
E2E:MaxNodes 3
E2E:MinNodes 1
E2E:Region us-central1
E2E:Version 1.30.5-gke.1443001

No Test Failures!


Error lines from build-log.txt

... skipping 196 lines ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100     9  100     9    0     0     58      0 --:--:-- --:--:-- --:--:--    58

gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
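The 9-byte download shown in the curl transfer table above is almost certainly an HTTP error body rather than a release tarball, which is why gzip rejects the stream. A hedged sketch of a pre-extraction guard (the `is_gzip` helper is hypothetical, not part of the project's test scripts):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical helper: succeeds only when a file begins with the gzip
# magic bytes 0x1f 0x8b, so tar is only invoked on plausible archives.
is_gzip() {
  [ "$(head -c 2 "$1" | od -An -tx1 | tr -d ' \n')" = "1f8b" ]
}

# Sketch of a safer fetch step: curl -f exits non-zero on HTTP >= 400
# instead of piping the error page into tar (URL is a placeholder).
# curl -fsSL "$RELEASE_URL" -o release.tgz
# is_gzip release.tgz && tar -xzf release.tgz
```

With a guard like this, the failure would surface as "downloaded file is not gzip" at the fetch step instead of the later `gzip: stdin: not in gzip format` from tar.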
>> Deploying Tekton Pipelines
namespace/tekton-pipelines created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-events-controller-cluster-access created
... skipping 64 lines ...
configmap/hubresolver-config created
deployment.apps/tekton-pipelines-remote-resolvers created
service/tekton-pipelines-remote-resolvers created
horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook created
deployment.apps/tekton-pipelines-webhook created
service/tekton-pipelines-webhook created
error: the server doesn't have a resource type "pipelineresources"
No resources found
No resources found
No resources found
No resources found
Waiting until all pods in namespace tekton-pipelines are up.....
All pods are up:
... skipping 10 lines ...

2024/11/13 20:45:57 Building github.com/tektoncd/chains/cmd/controller for linux/amd64
clusterrolebinding.rbac.authorization.k8s.io/tekton-chains-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-chains-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-chains-controller-tenant-access created
clusterrolebinding.rbac.authorization.k8s.io/tekton-chains-controller-tenant-access created
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
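The repeated NotFound errors above suggest the chains manifests were applied before the `tekton-chains` namespace existed. One hedged way to make the install step order-independent (a sketch under that assumption, not the project's actual deploy script):

```shell
# Create the namespace idempotently (apply, not create, so reruns succeed),
# then wait for it to be Active before applying the namespaced resources.
kubectl create namespace tekton-chains --dry-run=client -o yaml | kubectl apply -f -
kubectl wait --for=jsonpath='{.status.phase}'=Active namespace/tekton-chains --timeout=30s
```

This fragment requires a live cluster, so it is shown as a deploy sketch only.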
Error: error processing import paths in "config/100-deployment.yaml": error resolving image references: build: go build: exit status 1: # github.com/tektoncd/chains/pkg/chains/formats/slsa/v1/pipelinerun
pkg/chains/formats/slsa/v1/pipelinerun/pipelinerun.go:115:20: invalid operation: tr.Status == nil (mismatched types "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1".TaskRunStatus and untyped nil)
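The compile error above is the root cause of the failed run: in the pinned pipeline client, `v1beta1.TaskRunStatus` is a struct type, so `tr.Status == nil` cannot compile. A minimal sketch of the failure mode, using hypothetical simplified stand-in types rather than the real Tekton API:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the Tekton types involved;
// the real v1beta1.TaskRunStatus is likewise a struct, not a pointer.
type TaskRunStatus struct {
	PodName string
}

type TaskRun struct {
	Status TaskRunStatus // a value field: comparing it to nil is a type error
}

func main() {
	tr := TaskRun{}

	// if tr.Status == nil { ... }
	// does not compile: "mismatched types TaskRunStatus and untyped nil",
	// the same diagnostic as in the build log above.

	// A struct value can instead be compared against its zero value:
	if tr.Status == (TaskRunStatus{}) {
		fmt.Println("status is empty")
	}
}
```

A nil check only compiles for pointer, interface, map, slice, channel, or function types; for a struct field the usual alternatives are a zero-value comparison as above or checking a specific pointer-typed subfield.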

ERROR: Tekton Chains installation failed
***************************************
***         E2E TEST FAILED         ***
***    Start of information dump    ***
***************************************
>>> All resources:
NAMESPACE                    NAME                                                                 READY   STATUS    RESTARTS        AGE
gke-managed-cim              pod/kube-state-metrics-0                                             2/2     Running   1 (3m18s ago)   5m6s
gmp-system                   pod/collector-pnbdj                                                  2/2     Running   0               3m11s
... skipping 161 lines ...
gke-managed-cim              3m15s       Normal    Created                                  pod/kube-state-metrics-0                                             Created container kube-state-metrics
gke-managed-cim              3m14s       Normal    Started                                  pod/kube-state-metrics-0                                             Started container kube-state-metrics
gke-managed-cim              3m46s       Normal    Pulling                                  pod/kube-state-metrics-0                                             Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240501_2300_RC0@sha256:af727fbef6a16960bd3541d89b94e1a4938b57041e5869f148995d8c271a6334"
gke-managed-cim              3m42s       Normal    Pulled                                   pod/kube-state-metrics-0                                             Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240501_2300_RC0@sha256:af727fbef6a16960bd3541d89b94e1a4938b57041e5869f148995d8c271a6334" in 1.841s (3.679s including waiting). Image size: 23786769 bytes.
gke-managed-cim              3m42s       Normal    Created                                  pod/kube-state-metrics-0                                             Created container ksm-metrics-collector
gke-managed-cim              3m42s       Normal    Started                                  pod/kube-state-metrics-0                                             Started container ksm-metrics-collector
gke-managed-cim              3m19s       Warning   Unhealthy                                pod/kube-state-metrics-0                                             Readiness probe failed: Get "http://10.8.2.5:8081/": dial tcp 10.8.2.5:8081: connect: connection refused
gke-managed-cim              3m19s       Warning   Unhealthy                                pod/kube-state-metrics-0                                             Liveness probe failed: Get "http://10.8.2.5:8080/healthz": dial tcp 10.8.2.5:8080: connect: connection refused
gke-managed-cim              3m19s       Normal    Killing                                  pod/kube-state-metrics-0                                             Container kube-state-metrics failed liveness probe, will be restarted
gke-managed-cim              3m15s       Normal    Pulled                                   pod/kube-state-metrics-0                                             Container image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/kube-state-metrics:v2.7.0-gke.64@sha256:9b7f4be917b3a3c68ae75b47efa0081f23e163d7c94de053bffb0b2884763cdf" already present on machine
gke-managed-cim              5m7s        Warning   FailedCreate                             statefulset/kube-state-metrics                                       create Pod kube-state-metrics-0 in StatefulSet kube-state-metrics failed error: pods "kube-state-metrics-0" is forbidden: error looking up service account gke-managed-cim/kube-state-metrics: serviceaccount "kube-state-metrics" not found
gke-managed-cim              5m7s        Normal    SuccessfulCreate                         statefulset/kube-state-metrics                                       create Pod kube-state-metrics-0 in StatefulSet kube-state-metrics successful
gke-managed-cim              2m53s       Warning   FailedGetResourceMetric                  horizontalpodautoscaler/kube-state-metrics                           unable to get metric memory: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
gmp-system                   4m12s       Warning   FailedScheduling                         pod/alertmanager-0                                                   no nodes available to schedule pods
gmp-system                   4m2s        Warning   FailedScheduling                         pod/alertmanager-0                                                   0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
gmp-system                   3m50s       Normal    Scheduled                                pod/alertmanager-0                                                   Successfully assigned gmp-system/alertmanager-0 to gke-tchains-e2e-cls18567-default-pool-338eee48-1158
gmp-system                   3m18s       Warning   FailedMount                              pod/alertmanager-0                                                   MountVolume.SetUp failed for volume "config" : secret "alertmanager" not found
gmp-system                   4m37s       Normal    SuccessfulCreate                         statefulset/alertmanager                                             create Pod alertmanager-0 in StatefulSet alertmanager successful
gmp-system                   3m13s       Normal    SuccessfulDelete                         statefulset/alertmanager                                             delete Pod alertmanager-0 in StatefulSet alertmanager successful
gmp-system                   4m10s       Normal    Scheduled                                pod/collector-bddf5                                                  Successfully assigned gmp-system/collector-bddf5 to gke-tchains-e2e-cls18567-default-pool-338eee48-1158
gmp-system                   3m57s       Warning   NetworkNotReady                          pod/collector-bddf5                                                  network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
gmp-system                   3m59s       Warning   FailedMount                              pod/collector-bddf5                                                  MountVolume.SetUp failed for volume "config" : object "gmp-system"/"collector" not registered
gmp-system                   3m59s       Warning   FailedMount                              pod/collector-bddf5                                                  MountVolume.SetUp failed for volume "collection-secret" : object "gmp-system"/"collection" not registered
gmp-system                   3m59s       Warning   FailedMount                              pod/collector-bddf5                                                  MountVolume.SetUp failed for volume "kube-api-access-f4pw6" : object "gmp-system"/"kube-root-ca.crt" not registered
gmp-system                   3m35s       Warning   FailedMount                              pod/collector-bddf5                                                  MountVolume.SetUp failed for volume "config" : configmap "collector" not found
gmp-system                   3m35s       Warning   FailedMount                              pod/collector-bddf5                                                  MountVolume.SetUp failed for volume "collection-secret" : secret "collection" not found
gmp-system                   3m46s       Normal    Scheduled                                pod/collector-lr5rz                                                  Successfully assigned gmp-system/collector-lr5rz to gke-tchains-e2e-cls18567-default-pool-8b4cc668-npkf
gmp-system                   3m34s       Warning   NetworkNotReady                          pod/collector-lr5rz                                                  network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
gmp-system                   3m38s       Warning   FailedMount                              pod/collector-lr5rz                                                  MountVolume.SetUp failed for volume "config" : object "gmp-system"/"collector" not registered
gmp-system                   3m38s       Warning   FailedMount                              pod/collector-lr5rz                                                  MountVolume.SetUp failed for volume "collection-secret" : object "gmp-system"/"collection" not registered
gmp-system                   3m37s       Warning   FailedMount                              pod/collector-lr5rz                                                  MountVolume.SetUp failed for volume "kube-api-access-zhz6l" : object "gmp-system"/"kube-root-ca.crt" not registered
gmp-system                   3m30s       Warning   FailedMount                              pod/collector-lr5rz                                                  MountVolume.SetUp failed for volume "config" : configmap "collector" not found
gmp-system                   3m30s       Warning   FailedMount                              pod/collector-lr5rz                                                  MountVolume.SetUp failed for volume "collection-secret" : secret "collection" not found
gmp-system                   3m12s       Normal    Scheduled                                pod/collector-pnbdj                                                  Successfully assigned gmp-system/collector-pnbdj to gke-tchains-e2e-cls18567-default-pool-8b4cc668-npkf
gmp-system                   3m10s       Normal    Pulled                                   pod/collector-pnbdj                                                  Container image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240807.00_p0@sha256:ec5022c67b5316ae07f44ed374894e9bb55d548884d293da6b0d350a46dff2df" already present on machine
gmp-system                   3m8s        Normal    Created                                  pod/collector-pnbdj                                                  Created container config-init
gmp-system                   3m8s        Normal    Started                                  pod/collector-pnbdj                                                  Started container config-init
gmp-system                   3m7s        Normal    Pulling                                  pod/collector-pnbdj                                                  Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/prometheus:v2.45.3-gmp.8-gke.0@sha256:3e6493d4b01ab583382731491d980bc164873ad4969e92c0bdd0da278359ccac"
gmp-system                   3m4s        Normal    Pulled                                   pod/collector-pnbdj                                                  Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/prometheus:v2.45.3-gmp.8-gke.0@sha256:3e6493d4b01ab583382731491d980bc164873ad4969e92c0bdd0da278359ccac" in 3.323s (3.323s including waiting). Image size: 113010021 bytes.
... skipping 26 lines ...
gmp-system                   3m7s        Normal    Created                                  pod/collector-tzwjr                                                  Created container prometheus
gmp-system                   3m7s        Normal    Started                                  pod/collector-tzwjr                                                  Started container prometheus
gmp-system                   3m7s        Normal    Pulling                                  pod/collector-tzwjr                                                  Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/config-reloader:v0.13.1-gke.0@sha256:d199f266545ee281fa51d30e0a5f9c4da27da23055b153ca93adbf7483d19633"
gmp-system                   3m6s        Normal    Pulled                                   pod/collector-tzwjr                                                  Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/config-reloader:v0.13.1-gke.0@sha256:d199f266545ee281fa51d30e0a5f9c4da27da23055b153ca93adbf7483d19633" in 1.26s (1.26s including waiting). Image size: 59834302 bytes.
gmp-system                   3m6s        Normal    Created                                  pod/collector-tzwjr                                                  Created container config-reloader
gmp-system                   3m6s        Normal    Started                                  pod/collector-tzwjr                                                  Started container config-reloader
gmp-system                   3m36s       Warning   NetworkNotReady                          pod/collector-xrf4j                                                  network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
gmp-system                   3m45s       Normal    Scheduled                                pod/collector-xrf4j                                                  Successfully assigned gmp-system/collector-xrf4j to gke-tchains-e2e-cls18567-default-pool-6f14a5b1-9653
gmp-system                   3m37s       Warning   FailedMount                              pod/collector-xrf4j                                                  MountVolume.SetUp failed for volume "config" : object "gmp-system"/"collector" not registered
gmp-system                   3m37s       Warning   FailedMount                              pod/collector-xrf4j                                                  MountVolume.SetUp failed for volume "collection-secret" : object "gmp-system"/"collection" not registered
gmp-system                   3m37s       Warning   FailedMount                              pod/collector-xrf4j                                                  MountVolume.SetUp failed for volume "kube-api-access-lrwg8" : object "gmp-system"/"kube-root-ca.crt" not registered
gmp-system                   3m29s       Warning   FailedMount                              pod/collector-xrf4j                                                  MountVolume.SetUp failed for volume "collection-secret" : secret "collection" not found
gmp-system                   3m29s       Warning   FailedMount                              pod/collector-xrf4j                                                  MountVolume.SetUp failed for volume "config" : configmap "collector" not found
gmp-system                   4m10s       Normal    SuccessfulCreate                         daemonset/collector                                                  Created pod: collector-bddf5
gmp-system                   3m46s       Normal    SuccessfulCreate                         daemonset/collector                                                  Created pod: collector-lr5rz
gmp-system                   3m45s       Normal    SuccessfulCreate                         daemonset/collector                                                  Created pod: collector-xrf4j
gmp-system                   3m14s       Normal    SuccessfulDelete                         daemonset/collector                                                  Deleted pod: collector-xrf4j
gmp-system                   3m14s       Normal    SuccessfulDelete                         daemonset/collector                                                  Deleted pod: collector-lr5rz
gmp-system                   3m14s       Normal    SuccessfulDelete                         daemonset/collector                                                  Deleted pod: collector-bddf5
... skipping 4 lines ...
gmp-system                   4m2s        Warning   FailedScheduling                         pod/gmp-operator-65d7cb5d6f-x8dqc                                    0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
gmp-system                   3m50s       Normal    Scheduled                                pod/gmp-operator-65d7cb5d6f-x8dqc                                    Successfully assigned gmp-system/gmp-operator-65d7cb5d6f-x8dqc to gke-tchains-e2e-cls18567-default-pool-338eee48-1158
gmp-system                   3m49s       Normal    Pulling                                  pod/gmp-operator-65d7cb5d6f-x8dqc                                    Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/operator:v0.13.1-gke.0@sha256:b06adf14b06c9fc809d4b8db41329e4f3c34d9b1baa2abd45542ad817aed3917"
gmp-system                   3m43s       Normal    Pulled                                   pod/gmp-operator-65d7cb5d6f-x8dqc                                    Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/operator:v0.13.1-gke.0@sha256:b06adf14b06c9fc809d4b8db41329e4f3c34d9b1baa2abd45542ad817aed3917" in 2.91s (5.424s including waiting). Image size: 84461449 bytes.
gmp-system                   3m15s       Normal    Created                                  pod/gmp-operator-65d7cb5d6f-x8dqc                                    Created container operator
gmp-system                   3m14s       Normal    Started                                  pod/gmp-operator-65d7cb5d6f-x8dqc                                    Started container operator
gmp-system                   3m19s       Warning   Unhealthy                                pod/gmp-operator-65d7cb5d6f-x8dqc                                    Readiness probe failed: Get "http://10.8.2.10:18081/readyz": dial tcp 10.8.2.10:18081: connect: connection refused
gmp-system                   3m19s       Warning   Unhealthy                                pod/gmp-operator-65d7cb5d6f-x8dqc                                    Liveness probe failed: Get "http://10.8.2.10:18081/healthz": dial tcp 10.8.2.10:18081: connect: connection refused
gmp-system                   3m19s       Normal    Killing                                  pod/gmp-operator-65d7cb5d6f-x8dqc                                    Container operator failed liveness probe, will be restarted
gmp-system                   3m15s       Normal    Pulled                                   pod/gmp-operator-65d7cb5d6f-x8dqc                                    Container image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/operator:v0.13.1-gke.0@sha256:b06adf14b06c9fc809d4b8db41329e4f3c34d9b1baa2abd45542ad817aed3917" already present on machine
gmp-system                   4m38s       Normal    SuccessfulCreate                         replicaset/gmp-operator-65d7cb5d6f                                   Created pod: gmp-operator-65d7cb5d6f-x8dqc
gmp-system                   4m38s       Normal    ScalingReplicaSet                        deployment/gmp-operator                                              Scaled up replica set gmp-operator-65d7cb5d6f to 1
gmp-system                   3m13s       Normal    Scheduled                                pod/rule-evaluator-55cbc6f848-nqxqk                                  Successfully assigned gmp-system/rule-evaluator-55cbc6f848-nqxqk to gke-tchains-e2e-cls18567-default-pool-8b4cc668-npkf
gmp-system                   3m12s       Normal    Pulling                                  pod/rule-evaluator-55cbc6f848-nqxqk                                  Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240807.00_p0@sha256:ec5022c67b5316ae07f44ed374894e9bb55d548884d293da6b0d350a46dff2df"
gmp-system                   3m12s       Normal    Pulled                                   pod/rule-evaluator-55cbc6f848-nqxqk                                  Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240807.00_p0@sha256:ec5022c67b5316ae07f44ed374894e9bb55d548884d293da6b0d350a46dff2df" in 240ms (240ms including waiting). Image size: 18373482 bytes.
... skipping 2 lines ...
gmp-system                   3m11s       Normal    Killing                                  pod/rule-evaluator-55cbc6f848-nqxqk                                  Stopping container config-init
gmp-system                   3m13s       Normal    SuccessfulCreate                         replicaset/rule-evaluator-55cbc6f848                                 Created pod: rule-evaluator-55cbc6f848-nqxqk
gmp-system                   3m12s       Normal    SuccessfulDelete                         replicaset/rule-evaluator-55cbc6f848                                 Deleted pod: rule-evaluator-55cbc6f848-nqxqk
gmp-system                   4m12s       Warning   FailedScheduling                         pod/rule-evaluator-6f659bc47f-lntvm                                  no nodes available to schedule pods
gmp-system                   4m2s        Warning   FailedScheduling                         pod/rule-evaluator-6f659bc47f-lntvm                                  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
gmp-system                   3m50s       Normal    Scheduled                                pod/rule-evaluator-6f659bc47f-lntvm                                  Successfully assigned gmp-system/rule-evaluator-6f659bc47f-lntvm to gke-tchains-e2e-cls18567-default-pool-338eee48-1158
gmp-system                   3m18s       Warning   FailedMount                              pod/rule-evaluator-6f659bc47f-lntvm                                  MountVolume.SetUp failed for volume "rules-secret" : secret "rules" not found
gmp-system                   3m18s       Warning   FailedMount                              pod/rule-evaluator-6f659bc47f-lntvm                                  MountVolume.SetUp failed for volume "rules" : configmap "rules-generated" not found
gmp-system                   3m18s       Warning   FailedMount                              pod/rule-evaluator-6f659bc47f-lntvm                                  MountVolume.SetUp failed for volume "config" : configmap "rule-evaluator" not found
gmp-system                   4m37s       Normal    SuccessfulCreate                         replicaset/rule-evaluator-6f659bc47f                                 Created pod: rule-evaluator-6f659bc47f-lntvm
gmp-system                   3m13s       Normal    SuccessfulDelete                         replicaset/rule-evaluator-6f659bc47f                                 Deleted pod: rule-evaluator-6f659bc47f-lntvm
gmp-system                   4m37s       Normal    ScalingReplicaSet                        deployment/rule-evaluator                                            Scaled up replica set rule-evaluator-6f659bc47f to 1
gmp-system                   3m13s       Normal    ScalingReplicaSet                        deployment/rule-evaluator                                            Scaled up replica set rule-evaluator-55cbc6f848 to 1
gmp-system                   3m13s       Normal    ScalingReplicaSet                        deployment/rule-evaluator                                            Scaled down replica set rule-evaluator-6f659bc47f to 0 from 1
gmp-system                   3m13s       Normal    ScalingReplicaSet                        deployment/rule-evaluator                                            Scaled down replica set rule-evaluator-55cbc6f848 to 0 from 1
... skipping 162 lines ...
kube-system                  4m47s       Normal    SuccessfulCreate                         replicaset/konnectivity-agent-autoscaler-696cc5598c                  Created pod: konnectivity-agent-autoscaler-696cc5598c-n2h58
kube-system                  4m47s       Normal    ScalingReplicaSet                        deployment/konnectivity-agent-autoscaler                             Scaled up replica set konnectivity-agent-autoscaler-696cc5598c to 1
kube-system                  4m48s       Normal    ScalingReplicaSet                        deployment/konnectivity-agent                                        Scaled up replica set konnectivity-agent-5c6fc96b6f to 1
kube-system                  3m14s       Normal    ScalingReplicaSet                        deployment/konnectivity-agent                                        Scaled up replica set konnectivity-agent-5c6fc96b6f to 3 from 1
kube-system                  5m28s       Normal    LeaderElection                           lease/kube-controller-manager                                        gke-c423b0225e1a42e992f4-0f16-a013-vm_2685ee8e-ebff-46fe-ace1-151d1bde874c became leader
kube-system                  3m16s       Normal    Scheduled                                pod/kube-dns-64cd95ff56-gdvb9                                        Successfully assigned kube-system/kube-dns-64cd95ff56-gdvb9 to gke-tchains-e2e-cls18567-default-pool-6f14a5b1-9653
kube-system                  3m14s       Warning   FailedMount                              pod/kube-dns-64cd95ff56-gdvb9                                        MountVolume.SetUp failed for volume "kubedns-metrics-collector-config-map-vol" : failed to sync configmap cache: timed out waiting for the condition
kube-system                  3m13s       Normal    Pulling                                  pod/kube-dns-64cd95ff56-gdvb9                                        Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/k8s-dns-kube-dns:1.23.0-gke.9@sha256:48d7e5c5cdd5b356e55c3e61a7ae8f2657f15b661b385639f7b983fe134c0709"
kube-system                  3m11s       Normal    Pulled                                   pod/kube-dns-64cd95ff56-gdvb9                                        Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/k8s-dns-kube-dns:1.23.0-gke.9@sha256:48d7e5c5cdd5b356e55c3e61a7ae8f2657f15b661b385639f7b983fe134c0709" in 2.382s (2.382s including waiting). Image size: 32530343 bytes.
kube-system                  3m11s       Normal    Created                                  pod/kube-dns-64cd95ff56-gdvb9                                        Created container kubedns
kube-system                  3m11s       Normal    Started                                  pod/kube-dns-64cd95ff56-gdvb9                                        Started container kubedns
kube-system                  3m11s       Normal    Pulling                                  pod/kube-dns-64cd95ff56-gdvb9                                        Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/k8s-dns-dnsmasq-nanny:1.23.0-gke.9@sha256:8c165a991f95755137077c927455e2d996de2c3d5efb0c369f7d94f8dc7d4fb5"
kube-system                  3m5s        Normal    Pulled                                   pod/kube-dns-64cd95ff56-gdvb9                                        Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/k8s-dns-dnsmasq-nanny:1.23.0-gke.9@sha256:8c165a991f95755137077c927455e2d996de2c3d5efb0c369f7d94f8dc7d4fb5" in 5.78s (5.78s including waiting). Image size: 37174146 bytes.
... skipping 31 lines ...
kube-system                  3m39s       Normal    Created                                  pod/kube-dns-64cd95ff56-kflbf                                        Created container prometheus-to-sd
kube-system                  3m38s       Normal    Started                                  pod/kube-dns-64cd95ff56-kflbf                                        Started container prometheus-to-sd
kube-system                  3m38s       Normal    Pulling                                  pod/kube-dns-64cd95ff56-kflbf                                        Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240721_2300_RC0@sha256:3d23f78b137bf59ae1a9c71c54daf3186e07640719d1055c4ee84eb251edda64"
kube-system                  3m38s       Normal    Pulled                                   pod/kube-dns-64cd95ff56-kflbf                                        Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240721_2300_RC0@sha256:3d23f78b137bf59ae1a9c71c54daf3186e07640719d1055c4ee84eb251edda64" in 937ms (937ms including waiting). Image size: 24425611 bytes.
kube-system                  3m37s       Normal    Created                                  pod/kube-dns-64cd95ff56-kflbf                                        Created container kubedns-metrics-collector
kube-system                  3m37s       Normal    Started                                  pod/kube-dns-64cd95ff56-kflbf                                        Started container kubedns-metrics-collector
kube-system                  3m19s       Warning   Unhealthy                                pod/kube-dns-64cd95ff56-kflbf                                        Readiness probe failed: Get "http://10.8.2.3:8081/readiness": dial tcp 10.8.2.3:8081: connect: connection refused
kube-system                  5m14s       Normal    SuccessfulCreate                         replicaset/kube-dns-64cd95ff56                                       Created pod: kube-dns-64cd95ff56-kflbf
kube-system                  3m16s       Normal    SuccessfulCreate                         replicaset/kube-dns-64cd95ff56                                       Created pod: kube-dns-64cd95ff56-gdvb9
kube-system                  4m19s       Warning   FailedScheduling                         pod/kube-dns-autoscaler-6f896b6968-xbj9g                             no nodes available to schedule pods
kube-system                  4m9s        Warning   FailedScheduling                         pod/kube-dns-autoscaler-6f896b6968-xbj9g                             0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
kube-system                  3m50s       Normal    Scheduled                                pod/kube-dns-autoscaler-6f896b6968-xbj9g                             Successfully assigned kube-system/kube-dns-autoscaler-6f896b6968-xbj9g to gke-tchains-e2e-cls18567-default-pool-338eee48-1158
kube-system                  3m49s       Normal    Pulling                                  pod/kube-dns-autoscaler-6f896b6968-xbj9g                             Pulling image "gke.gcr.io/cluster-proportional-autoscaler:v1.8.11-gke.7@sha256:e3849fe9443dcead35f0ad364f5807ce830b63c16070a95a06bb525756643c4e"
... skipping 38 lines ...
kube-system                  3m49s       Normal    Pulling                                  pod/metrics-server-v1.30.3-8987bd844-h99nh                           Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/metrics-server:v0.7.1-gke.23@sha256:525d9a5c0336ada0fd1f81570dab011a3cabc2456576afa769803934e48f4a5a"
kube-system                  3m44s       Normal    Pulled                                   pod/metrics-server-v1.30.3-8987bd844-h99nh                           Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/metrics-server:v0.7.1-gke.23@sha256:525d9a5c0336ada0fd1f81570dab011a3cabc2456576afa769803934e48f4a5a" in 1.645s (4.394s including waiting). Image size: 19252717 bytes.
kube-system                  3m12s       Normal    Created                                  pod/metrics-server-v1.30.3-8987bd844-h99nh                           Created container metrics-server
kube-system                  3m12s       Normal    Started                                  pod/metrics-server-v1.30.3-8987bd844-h99nh                           Started container metrics-server
kube-system                  3m12s       Normal    Pulled                                   pod/metrics-server-v1.30.3-8987bd844-h99nh                           Container image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/metrics-server:v0.7.1-gke.23@sha256:525d9a5c0336ada0fd1f81570dab011a3cabc2456576afa769803934e48f4a5a" already present on machine
kube-system                  2m49s       Normal    Killing                                  pod/metrics-server-v1.30.3-8987bd844-h99nh                           Stopping container metrics-server
kube-system                  4m31s       Warning   FailedCreate                             replicaset/metrics-server-v1.30.3-8987bd844                          Error creating: pods "metrics-server-v1.30.3-8987bd844-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
kube-system                  4m30s       Normal    SuccessfulCreate                         replicaset/metrics-server-v1.30.3-8987bd844                          Created pod: metrics-server-v1.30.3-8987bd844-h99nh
kube-system                  2m49s       Normal    SuccessfulDelete                         replicaset/metrics-server-v1.30.3-8987bd844                          Deleted pod: metrics-server-v1.30.3-8987bd844-h99nh
kube-system                  4m33s       Normal    ScalingReplicaSet                        deployment/metrics-server-v1.30.3                                    Scaled up replica set metrics-server-v1.30.3-8987bd844 to 1
kube-system                  4m30s       Normal    ScalingReplicaSet                        deployment/metrics-server-v1.30.3                                    Scaled up replica set metrics-server-v1.30.3-7fff7dc68d to 1
kube-system                  2m49s       Normal    ScalingReplicaSet                        deployment/metrics-server-v1.30.3                                    Scaled down replica set metrics-server-v1.30.3-8987bd844 to 0 from 1
kube-system                  4m25s       Normal    LeaderElection                           lease/pd-csi-storage-gke-io                                          1731530616781-7881-pd-csi-storage-gke-io became leader
... skipping 57 lines ...
tekton-pipelines             2m41s       Normal    Created                                  pod/tekton-pipelines-webhook-fb7fdd4cd-xpvrg                         Created container webhook
tekton-pipelines             2m41s       Normal    Started                                  pod/tekton-pipelines-webhook-fb7fdd4cd-xpvrg                         Started container webhook
tekton-pipelines             2m44s       Normal    SuccessfulCreate                         replicaset/tekton-pipelines-webhook-fb7fdd4cd                        Created pod: tekton-pipelines-webhook-fb7fdd4cd-xpvrg
tekton-pipelines             2m45s       Normal    ScalingReplicaSet                        deployment/tekton-pipelines-webhook                                  Scaled up replica set tekton-pipelines-webhook-fb7fdd4cd to 1
tekton-pipelines             2m          Warning   FailedGetResourceMetric                  horizontalpodautoscaler/tekton-pipelines-webhook                     No recommendation
***************************************
***         E2E TEST FAILED         ***
***     End of information dump     ***
***************************************
2024/11/13 20:48:28 process.go:155: Step '/home/prow/go/src/github.com/tektoncd/chains/test/e2e-tests.sh --run-tests' finished in 2m55.21943643s
2024/11/13 20:48:28 main.go:319: Something went wrong: encountered 1 errors: [error during /home/prow/go/src/github.com/tektoncd/chains/test/e2e-tests.sh --run-tests: exit status 1]
Test subprocess exited with code 0
Artifacts were written to /logs/artifacts
Test result code is 1
==================================
==== INTEGRATION TESTS FAILED ====
==================================
+ EXIT_VALUE=1
+ set +o xtrace
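When triaging a dump like the one above, the interesting rows are usually the `Warning` events (`FailedScheduling`, `Unhealthy`, `FailedCreate`, `FailedGetResourceMetric`). A minimal sketch of filtering them out of the captured log text, assuming the whitespace-separated `kubectl get events` table layout shown above (NAMESPACE, LAST SEEN, TYPE, REASON, OBJECT, MESSAGE); the `warning_events` helper and the sample lines are illustrative, not part of the test harness:

```python
# Extract Warning rows from a kubectl event-dump in the table format above.
# Each row has six whitespace-separated columns:
#   NAMESPACE  LAST-SEEN  TYPE  REASON  OBJECT  MESSAGE
def warning_events(lines):
    warnings = []
    for line in lines:
        parts = line.split(None, 5)  # split into at most 6 fields; MESSAGE may contain spaces
        if len(parts) == 6 and parts[2] == "Warning":
            warnings.append(
                {"namespace": parts[0], "reason": parts[3], "object": parts[4], "message": parts[5]}
            )
    return warnings


# Two rows copied from the dump above (columns abbreviated for readability).
sample = [
    "kube-system  3m19s  Warning  Unhealthy  pod/kube-dns-64cd95ff56-kflbf  Readiness probe failed",
    "kube-system  3m11s  Normal   Started    pod/kube-dns-64cd95ff56-gdvb9  Started container kubedns",
]
for event in warning_events(sample):
    print(event["reason"], event["object"])
```

Running this over the full dump quickly surfaces transient startup noise (scheduling retries while the node is still tainted `not-ready`) versus warnings that persist to the end of the run, such as the webhook HPA's `FailedGetResourceMetric`.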