Describe the bug
After upgrading from 1.6.2 to 2.0.2, the podAnnotations configuration is only applied to the data plane of the setup.
Since we use Grafana Cloud, we set the Grafana scrape annotations ourselves through the Helm chart, but they are not propagated to both pods, only to the data plane.
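To be concrete, these are the annotations (taken from our values below) that we expect to see in the pod metadata of both the control-plane and the data-plane pods; currently they only show up on the data plane:

```yaml
# Expected on BOTH the control-plane and data-plane pods
metadata:
  annotations:
    k8s.grafana.com/metrics.portNumber: "9113"
    k8s.grafana.com/scrape: "true"
    k8s.grafana.com/job: nginx-gateway-fabric
```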
To Reproduce
Steps to reproduce the behavior:
Install version 2.0.2 of the nginx-gateway-fabric chart.
We installed it with the following values template:
service:
  type: ClusterIP
terminationGracePeriodSeconds: 60
nginx:
  config:
    telemetry:
      exporter:
        endpoint: k8s-monitoring-alloy.k8s-monitoring.svc.cluster.local:4317
      spanAttributes:
        - key: k8s.cluster.name
          value: ${environment}-${cluster_name}
        - key: deployment.environment
          value: ${environment_short}
  lifecycle: # Lifecycle is necessary, otherwise you'll get broken connections during downscale
    preStop:
      exec:
        command:
          - /bin/sleep
          - "60" # This flag is optional, the default is 30s
  service:
    type: ClusterIP
nginxGateway:
  configAnnotations:
    k8s.grafana.com/job: nginx-gateway-fabric
  gatewayClassAnnotations:
    k8s.grafana.com/job: nginx-gateway-fabric
  podAnnotations:
    k8s.grafana.com/metrics.portNumber: "9113"
    k8s.grafana.com/scrape: "true"
    k8s.grafana.com/job: nginx-gateway-fabric
  lifecycle: # Lifecycle is necessary, otherwise you'll get broken connections during downscale
    preStop:
      exec:
        command:
          - /usr/bin/gateway
          - sleep
          - --duration=60s # This flag is optional, the default is 30s
  resources:
    requests:
      cpu: "0.00841"
      memory: 75Mi
  service:
    annotations:
      k8s.grafana.com/metrics.portNumber: "9113"
      k8s.grafana.com/scrape: "true"
      k8s.grafana.com/job: nginx-gateway-fabric
  # Try to prevent an upscaled pod from starting on a worker node that already hosts one
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - nginx-gateway-fabric
          topologyKey: "kubernetes.io/hostname"
gateways:
  - name: default-http
    annotations:
      k8s.grafana.com/metrics.portNumber: "9113"
      k8s.grafana.com/scrape: "true"
      k8s.grafana.com/job: nginx-gateway-fabric
    spec:
      infrastructure:
        annotations:
          k8s.grafana.com/metrics.portNumber: "9113"
          k8s.grafana.com/scrape: "true"
          k8s.grafana.com/job: nginx-gateway-fabric
      gatewayClassName: nginx
      listeners:
        - protocol: HTTP
          port: 80
          name: http
          allowedRoutes:
            kinds:
              - kind: HTTPRoute
            namespaces:
              from: All
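For reference, the gateways entry above should render a Gateway resource roughly like the one below (a sketch reconstructed from the values; as far as we can tell, the spec.infrastructure.annotations block is how the annotations reach the data-plane pods):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: default-http
  annotations:
    k8s.grafana.com/metrics.portNumber: "9113"
    k8s.grafana.com/scrape: "true"
    k8s.grafana.com/job: nginx-gateway-fabric
spec:
  gatewayClassName: nginx
  infrastructure:
    annotations:
      k8s.grafana.com/metrics.portNumber: "9113"
      k8s.grafana.com/scrape: "true"
      k8s.grafana.com/job: nginx-gateway-fabric
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        kinds:
          - kind: HTTPRoute
        namespaces:
          from: All
```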
Expected behavior
We expect to get metrics from both the control plane and the data plane into our Grafana instance, but we only see those of the data plane (pods named nginx-gateway-fabric-{hash}-{id}).
This means the metric we used to scale our gateway (active connections) is no longer available.
Your environment
- Version of the NGINX Gateway Fabric: 2.0.2
- Kubernetes platform: EKS (Kubernetes 1.32)
Additional context
When we enable the metrics setting, the Prometheus annotations hard-coded in the chart are propagated to both the control-plane and data-plane pods.
We've tried every available key that mentions annotations in its name, but none seem to work in this specific case.
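For contrast, the chart-managed annotations that do get propagated to both pods look, if we recall correctly, something like this (exact keys may differ):

```yaml
# Approximate chart-managed metrics annotations (present on both control-plane and data-plane pods)
prometheus.io/scrape: "true"
prometheus.io/port: "9113"
```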
Currently we're missing metrics like the following in our cloud instance:
nginx_http_connection_count_connections
nginx_http_connections_total
nginx_http_request_count_requests
nginx_http_requests_total
These would be very helpful for determining our replica count, and they were available in v1 of this chart.
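For context on why these metrics matter to us, here is a sketch of how we use the active-connections metric to drive scaling. The HPA below is illustrative only; the deployment name, metric plumbing (Prometheus plus a metrics adapter), and target value are placeholders from our setup:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-gateway-fabric  # illustrative target name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-gateway-fabric
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: External
      external:
        metric:
          name: nginx_http_connection_count_connections  # the metric that is now missing
        target:
          type: AverageValue
          averageValue: "500"  # placeholder threshold
```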