Result        FAILURE
Tests         0 failed / 0 succeeded
Started       2024-12-13 14:10
Elapsed       12m32s
Revision      8de60f4a2d54b5734f3284acb69d9c882fa720ac
Refs          1264
E2E:Machine   n1-standard-4
E2E:MaxNodes  3
E2E:MinNodes  1
E2E:Region    us-central1
E2E:Version   1.30.5-gke.1699000

No Test Failures! (No test cases ran; the job failed during environment setup — see the build log below.)


Error lines from build-log.txt

... skipping 371 lines ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100     9  100     9    0     0     46      0 --:--:-- --:--:-- --:--:--    46

gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
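
The transfer above received only 9 bytes before tar ran, which suggests curl fetched an HTTP error body (or an empty placeholder response) rather than the expected release archive; piping that payload into tar -z is what produces the "not in gzip format" failure. A minimal fail-fast pattern for this kind of fetch is sketched below; RELEASE_TARBALL_URL is a hypothetical placeholder, and the exact download step in the setup script is an assumption, not taken from this log.

  # Sketch only: abort on HTTP errors instead of piping an error body into tar.
  # RELEASE_TARBALL_URL is a hypothetical placeholder for whatever the setup script downloads.
  curl -fsSL "${RELEASE_TARBALL_URL}" -o /tmp/release.tgz
  tar -xzf /tmp/release.tgz -C /tmp
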
>> Deploying Tekton Pipelines
namespace/tekton-pipelines created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-events-controller-cluster-access created
... skipping 64 lines ...
configmap/hubresolver-config created
deployment.apps/tekton-pipelines-remote-resolvers created
service/tekton-pipelines-remote-resolvers created
horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook created
deployment.apps/tekton-pipelines-webhook created
service/tekton-pipelines-webhook created
error: the server doesn't have a resource type "pipelineresources"
No resources found
No resources found
No resources found
No resources found
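
The "pipelineresources" error above is likely expected noise with current Tekton Pipelines releases: the v1alpha1 PipelineResource type was removed from Pipelines (around v0.46), so a cleanup step that still tries to list or delete that resource gets "the server doesn't have a resource type". The "No resources found" lines that follow are the remaining cleanup queries finding nothing to delete. If that cleanup step is kept, a guarded form such as the sketch below avoids the error; the delete target is inferred from the error text, not from the script itself.

  # Sketch only: skip the cleanup when the PipelineResource CRD is not installed.
  if kubectl api-resources --api-group=tekton.dev -o name | grep -q '^pipelineresources\.'; then
    kubectl delete pipelineresources --all --all-namespaces
  fi
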
Waiting until all pods in namespace tekton-pipelines are up.........
All pods are up:
... skipping 10 lines ...

2024/12/13 14:19:52 Building github.com/tektoncd/chains/cmd/controller for linux/amd64
clusterrolebinding.rbac.authorization.k8s.io/tekton-chains-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-chains-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-chains-controller-tenant-access created
clusterrolebinding.rbac.authorization.k8s.io/tekton-chains-controller-tenant-access created
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error from server (NotFound): error when creating "STDIN": namespaces "tekton-chains" not found
Error: error processing import paths in "config/100-deployment.yaml": error resolving image references: build: go build: exit status 1: # cloud.google.com/go/storage
vendor/cloud.google.com/go/storage/storage.go:264:25: undefined: stats.NewMetrics

ERROR: Tekton Chains installation failed
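
The installation failure traces back to the ko build error above: while resolving image references for config/100-deployment.yaml, go build fails inside the vendored cloud.google.com/go/storage package because stats.NewMetrics is undefined. That symbol pattern usually points to version skew between the vendored storage client and the gRPC/stats dependency it expects (an inference from the error text, not verified against this repo's go.mod). The earlier "namespaces tekton-chains not found" errors are consistent with the same aborted install: the cluster-scoped RBAC objects were created, but the tekton-chains namespace and the resources scoped to it never were. A typical remediation sketch follows, assuming the skew is in google.golang.org/grpc or a related module; the module and version are assumptions that must be confirmed against go.mod, since they are not in this log.

  # Sketch only: bump the suspected dependency, regenerate the vendor tree, then rebuild with ko.
  # Module and version are assumptions to verify against go.mod first.
  go get google.golang.org/grpc@latest
  go mod tidy
  go mod vendor
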
***************************************
***         E2E TEST FAILED         ***
***    Start of information dump    ***
***************************************
>>> All resources:
NAMESPACE                    NAME                                                                 READY   STATUS    RESTARTS   AGE
gke-managed-cim              pod/kube-state-metrics-0                                             2/2     Running   0          5m57s
gmp-system                   pod/collector-mcgkn                                                  2/2     Running   0          3m48s
... skipping 165 lines ...
gke-managed-cim              3m52s       Normal    Started                                  pod/kube-state-metrics-0                                             Started container ksm-metrics-collector
gke-managed-cim              5m58s       Normal    SuccessfulCreate                         statefulset/kube-state-metrics                                       create Pod kube-state-metrics-0 in StatefulSet kube-state-metrics successful
gke-managed-cim              3m9s        Warning   FailedGetResourceMetric                  horizontalpodautoscaler/kube-state-metrics                           unable to get metric memory: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
gmp-system                   4m49s       Warning   FailedScheduling                         pod/alertmanager-0                                                   no nodes available to schedule pods
gmp-system                   4m39s       Warning   FailedScheduling                         pod/alertmanager-0                                                   0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
gmp-system                   4m25s       Normal    Scheduled                                pod/alertmanager-0                                                   Successfully assigned gmp-system/alertmanager-0 to gke-tchains-e2e-cls18675-default-pool-d245ea6d-4wr6
gmp-system                   3m56s       Warning   FailedMount                              pod/alertmanager-0                                                   MountVolume.SetUp failed for volume "config" : secret "alertmanager" not found
gmp-system                   5m36s       Normal    SuccessfulCreate                         statefulset/alertmanager                                             create Pod alertmanager-0 in StatefulSet alertmanager successful
gmp-system                   3m50s       Normal    SuccessfulDelete                         statefulset/alertmanager                                             delete Pod alertmanager-0 in StatefulSet alertmanager successful
gmp-system                   4m45s       Normal    Scheduled                                pod/collector-ck95d                                                  Successfully assigned gmp-system/collector-ck95d to gke-tchains-e2e-cls18675-default-pool-da3f5458-61nr
gmp-system                   3m55s       Warning   FailedMount                              pod/collector-ck95d                                                  MountVolume.SetUp failed for volume "config" : configmap "collector" not found
gmp-system                   3m55s       Warning   FailedMount                              pod/collector-ck95d                                                  MountVolume.SetUp failed for volume "collection-secret" : secret "collection" not found
gmp-system                   4m44s       Normal    Scheduled                                pod/collector-kjhcv                                                  Successfully assigned gmp-system/collector-kjhcv to gke-tchains-e2e-cls18675-default-pool-02a7e857-lftt
gmp-system                   4m6s        Warning   FailedMount                              pod/collector-kjhcv                                                  MountVolume.SetUp failed for volume "collection-secret" : secret "collection" not found
gmp-system                   4m6s        Warning   FailedMount                              pod/collector-kjhcv                                                  MountVolume.SetUp failed for volume "config" : configmap "collector" not found
gmp-system                   3m49s       Normal    Scheduled                                pod/collector-mcgkn                                                  Successfully assigned gmp-system/collector-mcgkn to gke-tchains-e2e-cls18675-default-pool-02a7e857-lftt
gmp-system                   3m49s       Normal    Pulling                                  pod/collector-mcgkn                                                  Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240807.00_p0@sha256:ec5022c67b5316ae07f44ed374894e9bb55d548884d293da6b0d350a46dff2df"
gmp-system                   3m48s       Normal    Pulled                                   pod/collector-mcgkn                                                  Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240807.00_p0@sha256:ec5022c67b5316ae07f44ed374894e9bb55d548884d293da6b0d350a46dff2df" in 191ms (191ms including waiting). Image size: 18373482 bytes.
gmp-system                   3m48s       Normal    Created                                  pod/collector-mcgkn                                                  Created container config-init
gmp-system                   3m48s       Normal    Started                                  pod/collector-mcgkn                                                  Started container config-init
gmp-system                   3m42s       Normal    Pulling                                  pod/collector-mcgkn                                                  Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/prometheus:v2.45.3-gmp.8-gke.0@sha256:3e6493d4b01ab583382731491d980bc164873ad4969e92c0bdd0da278359ccac"
... skipping 2 lines ...
gmp-system                   3m40s       Normal    Started                                  pod/collector-mcgkn                                                  Started container prometheus
gmp-system                   3m40s       Normal    Pulling                                  pod/collector-mcgkn                                                  Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/config-reloader:v0.13.1-gke.0@sha256:d199f266545ee281fa51d30e0a5f9c4da27da23055b153ca93adbf7483d19633"
gmp-system                   3m38s       Normal    Pulled                                   pod/collector-mcgkn                                                  Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/config-reloader:v0.13.1-gke.0@sha256:d199f266545ee281fa51d30e0a5f9c4da27da23055b153ca93adbf7483d19633" in 1.212s (1.212s including waiting). Image size: 59834302 bytes.
gmp-system                   3m38s       Normal    Created                                  pod/collector-mcgkn                                                  Created container config-reloader
gmp-system                   3m38s       Normal    Started                                  pod/collector-mcgkn                                                  Started container config-reloader
gmp-system                   4m46s       Normal    Scheduled                                pod/collector-t95gp                                                  Successfully assigned gmp-system/collector-t95gp to gke-tchains-e2e-cls18675-default-pool-d245ea6d-4wr6
gmp-system                   3m56s       Warning   FailedMount                              pod/collector-t95gp                                                  MountVolume.SetUp failed for volume "collection-secret" : secret "collection" not found
gmp-system                   3m56s       Warning   FailedMount                              pod/collector-t95gp                                                  MountVolume.SetUp failed for volume "config" : configmap "collector" not found
gmp-system                   3m48s       Normal    Scheduled                                pod/collector-vjlqk                                                  Successfully assigned gmp-system/collector-vjlqk to gke-tchains-e2e-cls18675-default-pool-d245ea6d-4wr6
gmp-system                   3m48s       Normal    Pulling                                  pod/collector-vjlqk                                                  Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240807.00_p0@sha256:ec5022c67b5316ae07f44ed374894e9bb55d548884d293da6b0d350a46dff2df"
gmp-system                   3m44s       Normal    Pulled                                   pod/collector-vjlqk                                                  Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240807.00_p0@sha256:ec5022c67b5316ae07f44ed374894e9bb55d548884d293da6b0d350a46dff2df" in 323ms (3.247s including waiting). Image size: 18373482 bytes.
gmp-system                   3m44s       Normal    Created                                  pod/collector-vjlqk                                                  Created container config-init
gmp-system                   3m44s       Normal    Started                                  pod/collector-vjlqk                                                  Started container config-init
gmp-system                   3m39s       Normal    Pulling                                  pod/collector-vjlqk                                                  Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/prometheus-engine/prometheus:v2.45.3-gmp.8-gke.0@sha256:3e6493d4b01ab583382731491d980bc164873ad4969e92c0bdd0da278359ccac"
... skipping 34 lines ...
gmp-system                   3m51s       Normal    Started                                  pod/gmp-operator-787d7b4bb-rj9v2                                     Started container operator
gmp-system                   5m36s       Normal    SuccessfulCreate                         replicaset/gmp-operator-787d7b4bb                                    Created pod: gmp-operator-787d7b4bb-rj9v2
gmp-system                   5m36s       Normal    ScalingReplicaSet                        deployment/gmp-operator                                              Scaled up replica set gmp-operator-787d7b4bb to 1
gmp-system                   4m49s       Warning   FailedScheduling                         pod/rule-evaluator-6f659bc47f-jmqd8                                  no nodes available to schedule pods
gmp-system                   4m39s       Warning   FailedScheduling                         pod/rule-evaluator-6f659bc47f-jmqd8                                  0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
gmp-system                   4m25s       Normal    Scheduled                                pod/rule-evaluator-6f659bc47f-jmqd8                                  Successfully assigned gmp-system/rule-evaluator-6f659bc47f-jmqd8 to gke-tchains-e2e-cls18675-default-pool-d245ea6d-4wr6
gmp-system                   3m56s       Warning   FailedMount                              pod/rule-evaluator-6f659bc47f-jmqd8                                  MountVolume.SetUp failed for volume "rules-secret" : secret "rules" not found
gmp-system                   3m56s       Warning   FailedMount                              pod/rule-evaluator-6f659bc47f-jmqd8                                  MountVolume.SetUp failed for volume "rules" : configmap "rules-generated" not found
gmp-system                   3m56s       Warning   FailedMount                              pod/rule-evaluator-6f659bc47f-jmqd8                                  MountVolume.SetUp failed for volume "config" : configmap "rule-evaluator" not found
gmp-system                   5m36s       Normal    SuccessfulCreate                         replicaset/rule-evaluator-6f659bc47f                                 Created pod: rule-evaluator-6f659bc47f-jmqd8
gmp-system                   3m50s       Normal    SuccessfulDelete                         replicaset/rule-evaluator-6f659bc47f                                 Deleted pod: rule-evaluator-6f659bc47f-jmqd8
gmp-system                   3m50s       Normal    Scheduled                                pod/rule-evaluator-dcfd7b8cc-5d9kd                                   Successfully assigned gmp-system/rule-evaluator-dcfd7b8cc-5d9kd to gke-tchains-e2e-cls18675-default-pool-da3f5458-61nr
gmp-system                   3m49s       Normal    Pulling                                  pod/rule-evaluator-dcfd7b8cc-5d9kd                                   Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240807.00_p0@sha256:ec5022c67b5316ae07f44ed374894e9bb55d548884d293da6b0d350a46dff2df"
gmp-system                   3m49s       Normal    Pulled                                   pod/rule-evaluator-dcfd7b8cc-5d9kd                                   Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-distroless/bash:gke_distroless_20240807.00_p0@sha256:ec5022c67b5316ae07f44ed374894e9bb55d548884d293da6b0d350a46dff2df" in 195ms (195ms including waiting). Image size: 18373482 bytes.
gmp-system                   3m49s       Normal    Created                                  pod/rule-evaluator-dcfd7b8cc-5d9kd                                   Created container config-init
... skipping 145 lines ...
kube-system                  3m51s       Normal    Started                                  pod/konnectivity-agent-5c7bc46b5-c4wbs                               Started container konnectivity-agent
kube-system                  3m51s       Normal    Pulling                                  pod/konnectivity-agent-5c7bc46b5-c4wbs                               Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240717_2300_RC0@sha256:d460e6b5088332f62b990f8a1f7bf6d9eca7c3f41cb974e3db493d6b0fc4ad70"
kube-system                  3m50s       Normal    Pulled                                   pod/konnectivity-agent-5c7bc46b5-c4wbs                               Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240717_2300_RC0@sha256:d460e6b5088332f62b990f8a1f7bf6d9eca7c3f41cb974e3db493d6b0fc4ad70" in 985ms (985ms including waiting). Image size: 24425624 bytes.
kube-system                  3m50s       Normal    Created                                  pod/konnectivity-agent-5c7bc46b5-c4wbs                               Created container konnectivity-agent-metrics-collector
kube-system                  3m50s       Normal    Started                                  pod/konnectivity-agent-5c7bc46b5-c4wbs                               Started container konnectivity-agent-metrics-collector
kube-system                  3m53s       Normal    Scheduled                                pod/konnectivity-agent-5c7bc46b5-zjq8r                               Successfully assigned kube-system/konnectivity-agent-5c7bc46b5-zjq8r to gke-tchains-e2e-cls18675-default-pool-02a7e857-lftt
kube-system                  3m52s       Warning   FailedMount                              pod/konnectivity-agent-5c7bc46b5-zjq8r                               MountVolume.SetUp failed for volume "konnectivity-agent-metrics-collector-config-map-vol" : failed to sync configmap cache: timed out waiting for the condition
kube-system                  3m51s       Normal    Pulling                                  pod/konnectivity-agent-5c7bc46b5-zjq8r                               Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/proxy-agent:v0.30.2-gke.2@sha256:d0346df5dceadc5bd9fa6a00415353bcc85b18c48a40bee5aa0df698c13c39f4"
kube-system                  3m50s       Normal    Pulled                                   pod/konnectivity-agent-5c7bc46b5-zjq8r                               Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/proxy-agent:v0.30.2-gke.2@sha256:d0346df5dceadc5bd9fa6a00415353bcc85b18c48a40bee5aa0df698c13c39f4" in 1.165s (1.165s including waiting). Image size: 10288044 bytes.
kube-system                  3m50s       Normal    Created                                  pod/konnectivity-agent-5c7bc46b5-zjq8r                               Created container konnectivity-agent
kube-system                  3m50s       Normal    Started                                  pod/konnectivity-agent-5c7bc46b5-zjq8r                               Started container konnectivity-agent
kube-system                  3m50s       Normal    Pulling                                  pod/konnectivity-agent-5c7bc46b5-zjq8r                               Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240717_2300_RC0@sha256:d460e6b5088332f62b990f8a1f7bf6d9eca7c3f41cb974e3db493d6b0fc4ad70"
kube-system                  3m49s       Normal    Pulled                                   pod/konnectivity-agent-5c7bc46b5-zjq8r                               Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/gke-metrics-collector:20240717_2300_RC0@sha256:d460e6b5088332f62b990f8a1f7bf6d9eca7c3f41cb974e3db493d6b0fc4ad70" in 1.082s (1.082s including waiting). Image size: 24425624 bytes.
... skipping 96 lines ...
kube-system                  4m25s       Normal    Scheduled                                pod/metrics-server-v1.30.3-5c9bdb779d-sl8n4                          Successfully assigned kube-system/metrics-server-v1.30.3-5c9bdb779d-sl8n4 to gke-tchains-e2e-cls18675-default-pool-d245ea6d-4wr6
kube-system                  4m2s        Normal    Pulling                                  pod/metrics-server-v1.30.3-5c9bdb779d-sl8n4                          Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/metrics-server:v0.7.1-gke.24@sha256:7a16f036168572dcde9b6bce67d6399fcaf91ddb5f8315f1755970711318221b"
kube-system                  3m54s       Normal    Pulled                                   pod/metrics-server-v1.30.3-5c9bdb779d-sl8n4                          Successfully pulled image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/metrics-server:v0.7.1-gke.24@sha256:7a16f036168572dcde9b6bce67d6399fcaf91ddb5f8315f1755970711318221b" in 2.245s (7.85s including waiting). Image size: 19252714 bytes.
kube-system                  3m54s       Normal    Created                                  pod/metrics-server-v1.30.3-5c9bdb779d-sl8n4                          Created container metrics-server
kube-system                  3m54s       Normal    Started                                  pod/metrics-server-v1.30.3-5c9bdb779d-sl8n4                          Started container metrics-server
kube-system                  2m43s       Normal    Killing                                  pod/metrics-server-v1.30.3-5c9bdb779d-sl8n4                          Stopping container metrics-server
kube-system                  5m30s       Warning   FailedCreate                             replicaset/metrics-server-v1.30.3-5c9bdb779d                         Error creating: pods "metrics-server-v1.30.3-5c9bdb779d-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
kube-system                  5m29s       Normal    SuccessfulCreate                         replicaset/metrics-server-v1.30.3-5c9bdb779d                         Created pod: metrics-server-v1.30.3-5c9bdb779d-sl8n4
kube-system                  2m43s       Normal    SuccessfulDelete                         replicaset/metrics-server-v1.30.3-5c9bdb779d                         Deleted pod: metrics-server-v1.30.3-5c9bdb779d-sl8n4
kube-system                  4m52s       Warning   FailedScheduling                         pod/metrics-server-v1.30.3-75c9b65594-9n6cf                          no nodes available to schedule pods
kube-system                  4m42s       Warning   FailedScheduling                         pod/metrics-server-v1.30.3-75c9b65594-9n6cf                          0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
kube-system                  4m25s       Normal    Scheduled                                pod/metrics-server-v1.30.3-75c9b65594-9n6cf                          Successfully assigned kube-system/metrics-server-v1.30.3-75c9b65594-9n6cf to gke-tchains-e2e-cls18675-default-pool-d245ea6d-4wr6
kube-system                  3m59s       Normal    Pulling                                  pod/metrics-server-v1.30.3-75c9b65594-9n6cf                          Pulling image "us-central1-artifactregistry.gcr.io/gke-release/gke-release/metrics-server:v0.7.1-gke.24@sha256:7a16f036168572dcde9b6bce67d6399fcaf91ddb5f8315f1755970711318221b"
... skipping 65 lines ...
tekton-pipelines             3m55s       Normal    Created                                  pod/tekton-pipelines-webhook-fb7fdd4cd-x8xbm                         Created container webhook
tekton-pipelines             3m55s       Normal    Started                                  pod/tekton-pipelines-webhook-fb7fdd4cd-x8xbm                         Started container webhook
tekton-pipelines             3m58s       Normal    SuccessfulCreate                         replicaset/tekton-pipelines-webhook-fb7fdd4cd                        Created pod: tekton-pipelines-webhook-fb7fdd4cd-x8xbm
tekton-pipelines             3m58s       Normal    ScalingReplicaSet                        deployment/tekton-pipelines-webhook                                  Scaled up replica set tekton-pipelines-webhook-fb7fdd4cd to 1
tekton-pipelines             2m43s       Warning   FailedGetResourceMetric                  horizontalpodautoscaler/tekton-pipelines-webhook                     No recommendation
***************************************
***         E2E TEST FAILED         ***
***     End of information dump     ***
***************************************
2024/12/13 14:23:28 process.go:155: Step '/home/prow/go/src/github.com/tektoncd/chains/test/e2e-tests.sh --run-tests' finished in 4m9.862323475s
2024/12/13 14:23:28 main.go:319: Something went wrong: encountered 1 errors: [error during /home/prow/go/src/github.com/tektoncd/chains/test/e2e-tests.sh --run-tests: exit status 1]
Test subprocess exited with code 0
Artifacts were written to /logs/artifacts
Test result code is 1
==================================
==== INTEGRATION TESTS FAILED ====
==================================
+ EXIT_VALUE=1
+ set +o xtrace