Getting ingress controller fake certificate in ssl-passthrough mode #12897

Open

feiluo-db opened this issue Feb 24, 2025 · 10 comments

Labels
needs-kind Indicates a PR lacks a `kind/foo` label and requires one. needs-priority needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

@feiluo-db

feiluo-db commented Feb 24, 2025

I deployed the ingress controller with the --enable-ssl-passthrough flag on, and verified in the nginx.conf file that it is indeed turned on.
The ingress controller is started with --ingress-class=my-test-nginx to match the ingress class annotation on the Ingress resource.
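
For reference, those two flags would appear in the controller container args roughly like this (a minimal sketch; the Deployment layout is an assumption, not taken from this issue):

args:
  - /nginx-ingress-controller
  - --enable-ssl-passthrough       # run the controller's TLS passthrough TCP proxy in front of nginx
  - --ingress-class=my-test-nginx  # only watch Ingresses carrying this class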
On my Ingress resource I added the following annotation:

"nginx.ingress.kubernetes.io/ssl-passthrough": "true"

The full configuration looks like the following:

{
    apiVersion: "networking.k8s.io/v1",
    kind: "Ingress",
    metadata: {
      name: "my-test-ingress",
      namespace: "my-test-ns",
      annotations: {
        "kubernetes.io/ingress.class": "my-test-nginx",
        "nginx.ingress.kubernetes.io/ssl-passthrough": "true",
        "nginx.ingress.kubernetes.io/ssl-redirect": "true",
      },
    },
    spec: {
      rules: [
        {
          host: "my-test.dev.example.com",
          http: {
            paths: [
              {
                path: "/",
                pathType: "Prefix",
                backend: {
                  service: {
                    name: "my-test-svc",
                    port: {
                      number: 8443,
                    },
                  },
                },
              },
            ],
          },
        },
      ],
    },
  },
{
    appName:: "my-test-svc",
    apiVersion: "v1",
    kind: "Service",

    metadata: {
      name: "my-test-svc",
      namespace: "my-test-ns",
    },
    spec: {
      ports: [
        {
          name: "doesnt matter",
          port: 8443,
          targetPort: 8443,
          protocol: "TCP",
        },
      ],

      selector: { app: "my-test-app" },
      type: "ClusterIP",
    },
  },

My ingress controller is deployed on AWS as an AWS ELB. No TLS cert is configured on the ELB listener, as it shouldn't terminate TLS.
Any advice on how to further debug this would be very much appreciated!
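
One generic way to confirm the ELB's 443 listener is plain TCP with no certificate attached (the load balancer name below is a placeholder):

aws elb describe-load-balancers --load-balancer-names <elb-name> \
  --query 'LoadBalancerDescriptions[].ListenerDescriptions'

For passthrough, the 443 listener should show Protocol: TCP with no SSLCertificateId.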

@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-kind Indicates a PR lacks a `kind/foo` label and requires one. needs-priority labels Feb 24, 2025
@longwuyuan
Contributor

Show the curl request and the response with the -iv flags.

If you answer the questions asked in the new-issue template, then readers will have data to analyze and base comments on.

@feiluo-db
Author

I noticed a misconfiguration. Sorry for the noise!

@feiluo-db feiluo-db reopened this Feb 26, 2025
@feiluo-db
Author

Actually it still doesn't work.

Curl request and response with the -iv flags:

curl -iv https://runbot-fei-luo-new-8-ci-shard.dev.databricks.com
*   Trying 44.227.188.95:443...
* TCP_NODELAY set
* Connected to runbot-fei-luo-new-8-ci-shard.dev.databricks.com (44.227.188.95) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

@longwuyuan
Contributor

@feiluo-db you will still get only guess-based comments.

You showed the curl command and output. But you also need to show the kubectl describe output of all related resources, like:

  • kubectl -n ingress-nginx get all
  • kubectl -n ingress-nginx describe svc ingress-nginx-controller
  • kubectl -n my-test-ns get all
  • kubectl -n my-test-ns describe svc my-test-svc
  • kubectl -n my-test-ns describe ing my-test-ingress

In the post above, you are sending the request to a host different from the one seen in the ingress YAML, so the data you sent is useless for analysis.

@feiluo-db
Author

@longwuyuan Yeah, the my-test names are fake names I made up for simplicity.

Here are the actual outputs:

k describe svc cci-ingress-controller-service

Name:                        cci-ingress-controller-service
Namespace:                   jenkins-build
Labels:                      app.kubernetes.io/component=controller
                             app.kubernetes.io/instance=ingress-nginx
                             app.kubernetes.io/name=ingress-nginx
                             app.kubernetes.io/part-of=ingress-nginx
                             app.kubernetes.io/version=0.41.0
Annotations:                 service.beta.kubernetes.io/aws-load-balancer-type: elb
Selector:                    app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,app=cci-ingress-controller
Type:                        LoadBalancer
IP Family Policy:            SingleStack
IP Families:                 IPv4
IP:                          10.3.203.29
IPs:                         10.3.203.29
LoadBalancer Ingress:        a70fe2029834541878f29b5f25767d0f-557761935.us-west-2.elb.amazonaws.com
Port:                        metrics  10254/TCP
TargetPort:                  10254/TCP
NodePort:                    metrics  31293/TCP
Endpoints:                   10.6.23.194:10254
Port:                        http  80/TCP
TargetPort:                  80/TCP
NodePort:                    http  32145/TCP
Endpoints:                   10.6.23.194:80
Port:                        https  443/TCP
TargetPort:                  https/TCP
NodePort:                    https  32003/TCP
Endpoints:                   10.6.23.194:443
Session Affinity:            None
External Traffic Policy:     Local
HealthCheck NodePort:        31511

k describe svc runbot-fei-luo-new-8-ci-shard-internal-ext

Name:              runbot-fei-luo-new-8-ci-shard-internal-ext
Namespace:         jenkins-build
Labels:            <none>
Annotations:       databricks/last_modified_by: fei.luo
Selector:          app=runbot-fei-luo-new-8-ci-shard
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.3.158.7
IPs:               10.3.158.7
Port:              runbot-port-ingress  8443/TCP
TargetPort:        8443/TCP
Endpoints:         10.6.16.174:8443
Session Affinity:  None
Events:            <none>

k describe ing runbot-fei-luo-new-8-ci-shard-ingress

Name:             runbot-fei-luo-new-8-ci-shard-ingress
Labels:           <none>
Namespace:        jenkins-build
Address:          
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                                              Path  Backends
  ----                                              ----  --------
  runbot-fei-luo-new-8-ci-shard.dev.databricks.com  
                                                    /   runbot-fei-luo-new-8-ci-shard-internal-ext:8443 (10.6.16.174:8443)
Annotations:                                        kubernetes.io/ingress.class: ci-shard-nginx
                                                    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
                                                    nginx.ingress.kubernetes.io/ssl-passthrough: true
Events:
  Type    Reason  Age    From                      Message
  ----    ------  ----   ----                      -------
  Normal  Sync    4m30s  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    4m28s  nginx-ingress-controller  Scheduled for sync

@longwuyuan
Contributor

longwuyuan commented Feb 26, 2025 via email

@feiluo-db
Author

Thanks. Yes, I did follow https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#kubernetesingress-nginx to add the annotations to the ingress resource. It didn't work, if I remember correctly. I can try that again, but it likely won't solve the issue.

Secondly, I curl'ed the service ClusterIP from within the same cluster to make sure the server certificate is valid. I did get the right server certificate, which has DigiCert as the root CA in the chain.
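
For reference, a minimal sketch of that in-cluster check (the curl image and the --resolve mapping to the ClusterIP are assumptions, not taken from this issue):

kubectl -n jenkins-build run curl-check --rm -it --image=curlimages/curl --command -- \
  curl -v --resolve runbot-fei-luo-new-8-ci-shard.dev.databricks.com:8443:10.3.158.7 \
  https://runbot-fei-luo-new-8-ci-shard.dev.databricks.com:8443/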

I think the ingress controller image we're using is v1.3.1. I can try to upgrade to a newer version, but again I don't think that is the root cause, as ssl-passthrough should have been well supported even before that version.

@longwuyuan
Contributor

longwuyuan commented Feb 26, 2025 via email

@longwuyuan
Contributor

Important points to note are as follows.

  • Having problems does not mean you should pick annotations of your own choice. Regardless of problems, the ingress should be configured with only and only these 2 annotations (see the sketch after this list):
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"

    Dropping one of these 2 annotations and using the backend-protocol annotation instead is an INVALID test.

  • The certificate you use inside the Jenkins server needs to be a pem/cert file that contains the cert + full chain. If you use only the cert, some clients may not know the DigiCert CA. In any case, you need to use the openssl s_client -connect ..... command to see which certificate is presented from outside the cluster (a concrete command is sketched after this list). Testing over the ClusterIP, however right it may seem, is an INVALID test.

  • If ssl-passthrough were a broken feature, the whole community of ssl-passthrough users would be reporting the problem. Since that is not happening, the root cause is in your environment and not in the controller.

  • Hence the easy way to find the root cause is to show the real live data from the cluster in a Zoom session, since posting the certificate hash etc. publicly would not be secure for you. Reach out to me on Slack, as that allows live communication.
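
For clarity, a minimal sketch of an Ingress carrying only those 2 annotations, reusing the made-up my-test names from the top of this issue:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-test-ingress
  namespace: my-test-ns
  annotations:
    kubernetes.io/ingress.class: my-test-nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
    - host: my-test.dev.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-test-svc
                port:
                  number: 8443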
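
And a sketch of the openssl check from outside the cluster (standard s_client usage; the hostname is the one from this issue):

openssl s_client -connect runbot-fei-luo-new-8-ci-shard.dev.databricks.com:443 \
  -servername runbot-fei-luo-new-8-ci-shard.dev.databricks.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer

If passthrough is working, the issuer should be the backend's DigiCert chain rather than the Kubernetes Ingress Controller Fake Certificate.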
