Kubernetes always serves 503 Service Temporarily Unavailable with multiple TLS Ingress hosts


I have a Kubernetes cluster set up with kops on Amazon Web Services.

I have two sites configured. One is secured with SSL/TLS/https, the other is plain http. Both are WordPress sites. The domain names have been changed to protect the sites' identity.

Ingress configuration:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-rules
spec:
  tls:
  - hosts:
    - site1.com
    secretName: site1-tls-secret
  - hosts:
    - www.site1.com
    secretName: site1-tls-secret
  rules:
  - host: site1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site1
          servicePort: 80
  - host: www.site1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site1
          servicePort: 80
  - host: blog.site2.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site2
          servicePort: 80

Ingress Service:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    app: nginx-ingress
    k8s-addon: ingress-nginx.addons.k8s.io
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: 'tcp'
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: nginx-ingress

Ingress Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - name: nginx-ingress
        image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
          hostPort: 80
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/echoheaders-default
        - --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf

Generated nginx.conf:

daemon off;

worker_processes 1;
pid /run/nginx.pid;

worker_rlimit_nofile 1047552;
events {
    multi_accept        on;
    worker_connections  16384;
    use                 epoll;
}

http {
    set_real_ip_from    0.0.0.0/0;
    real_ip_header      proxy_protocol;

    real_ip_recursive   on;

    geoip_country       /etc/nginx/GeoIP.dat;
    geoip_city          /etc/nginx/GeoLiteCity.dat;
    geoip_proxy_recursive on;
    # lua section to return proper error codes when custom pages are used
    lua_package_path '.?.lua;/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/lua-resty-http/lib/?.lua;';
    init_by_lua_block {
        require("error_page")
    }

    sendfile            on;
    aio                 threads;
    tcp_nopush          on;
    tcp_nodelay         on;

    log_subrequest      on;

    reset_timedout_connection on;
    keepalive_timeout  75s;
    keepalive_requests 100;

    client_header_buffer_size       1k;
    large_client_header_buffers     4 8k;
    client_body_buffer_size         8k;

    http2_max_field_size            4k;
    http2_max_header_size           16k;

    types_hash_max_size             2048;
    server_names_hash_max_size      1024;
    server_names_hash_bucket_size   64;
    map_hash_bucket_size            64;

    proxy_headers_hash_max_size     512;
    proxy_headers_hash_bucket_size  64;

    variables_hash_bucket_size      64;
    variables_hash_max_size         2048;

    underscores_in_headers          off;
    ignore_invalid_headers          on;

    include /etc/nginx/mime.types;
    default_type text/html;
    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
    gzip_proxied any;

    # Custom headers for response

    server_tokens on;

    # disable warnings
    uninitialized_variable_warn off;

    log_format upstreaminfo '$the_real_ip - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status';

    map $request_uri $loggable {
        default 1;
    }

    access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
    error_log  /var/log/nginx/error.log notice;

    resolver 100.64.0.10 valid=30s;

    # Retain the default nginx handling of requests without a "Connection" header
    map $http_upgrade $connection_upgrade {
        default          upgrade;
        ''               close;
    }
    # trust http_x_forwarded_proto headers correctly indicate ssl offloading
    map $http_x_forwarded_proto $pass_access_scheme {
        default          $http_x_forwarded_proto;
        ''               $scheme;
    }

    map $http_x_forwarded_port $pass_server_port {
       default           $http_x_forwarded_port;
       ''                $server_port;
    }

    map $http_x_forwarded_for $the_real_ip {
        default          $http_x_forwarded_for;
        ''               $proxy_protocol_addr;
    }

    # map port 442 to 443 for header X-Forwarded-Port
    map $pass_server_port $pass_port {
        442              443;
        default          $pass_server_port;
    }

    # Map a response error watching the header Content-Type
    map $http_accept $httpAccept {
        default          html;
        application/json json;
        application/xml  xml;
        text/plain       text;
    }

    map $httpAccept $httpReturnType {
        default          text/html;
        json             application/json;
        xml              application/xml;
        text             text/plain;
    }

    # Obtain best http host
    map $http_host $this_host {
        default          $http_host;
        ''               $host;
    }

    map $http_x_forwarded_host $best_http_host {
        default          $http_x_forwarded_host;
        ''               $this_host;
    }

    server_name_in_redirect off;
    port_in_redirect        off;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    # turn on session caching to drastically improve performance
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_session_timeout 10m;

    # allow configuring ssl session tickets
    ssl_session_tickets on;

    # slightly reduce the time-to-first-byte
    ssl_buffer_size 4k;

    # allow configuring custom ssl ciphers
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;

    ssl_ecdh_curve secp384r1;

    proxy_ssl_session_reuse on;

    upstream upstream-default-backend {
        # Load balance algorithm; empty for round robin, which is the default
        least_conn;
        server 100.96.1.49:8080 max_fails=0 fail_timeout=0;
    }

    upstream default-site1-80 {
        # Load balance algorithm; empty for round robin, which is the default
        least_conn;
        server 127.0.0.1:8181 max_fails=0 fail_timeout=0;
    }

    upstream default-site2blog-80 {
        # Load balance algorithm; empty for round robin, which is the default
        least_conn;
        server 100.96.2.127:80 max_fails=0 fail_timeout=0;
        server 100.96.1.52:80 max_fails=0 fail_timeout=0;
    }
    server {
        server_name _;
        listen 80 proxy_protocol default_server reuseport backlog=511;
        listen [::]:80 proxy_protocol default_server reuseport backlog=511;
        set $proxy_upstream_name "-";

        listen 442 proxy_protocol default_server reuseport backlog=511 ssl http2;
        listen [::]:442 proxy_protocol  default_server reuseport backlog=511 ssl http2;
        # PEM sha: ------
        ssl_certificate                         /ingress-controller/ssl/default-fake-certificate.pem;
        ssl_certificate_key                     /ingress-controller/ssl/default-fake-certificate.pem;

        more_set_headers                        "Strict-Transport-Security: max-age=15724800; includeSubDomains;";
        location / {
            set $proxy_upstream_name "upstream-default-backend";

            port_in_redirect off;

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;
            proxy_set_header X-Forwarded-For        $the_real_ip;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   10s;
            proxy_send_timeout                      120s;
            proxy_read_timeout                      120s;

            proxy_redirect                          off;
            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://upstream-default-backend;
        }

        # health checks in cloud providers require the use of port 80
        location /healthz {
            access_log off;
            return 200;
        }

        # this is required to avoid error if nginx is being monitored
        # with an external software (like sysdig)
        location /nginx_status {
            allow 127.0.0.1;
            allow ::1;
            deny all;

            access_log off;
            stub_status on;
        }
    }
    server {
        server_name blog.site2.com;
        listen 80 proxy_protocol;
        listen [::]:80 proxy_protocol;
        set $proxy_upstream_name "-";
        location / {
            set $proxy_upstream_name "default-site2blog-80";

            port_in_redirect off;

            client_max_body_size                    "20m";

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;
            proxy_set_header X-Forwarded-For        $the_real_ip;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";
            # Custom headers to proxied server

            proxy_connect_timeout                   10s;
            proxy_send_timeout                      120s;
            proxy_read_timeout                      120s;

            proxy_redirect                          off;
            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://default-site2blog-80;
        }

    }

    server {
        server_name site1.com;
        listen 80 proxy_protocol;
        listen [::]:80 proxy_protocol;
        set $proxy_upstream_name "-";

        listen 442 proxy_protocol ssl http2;
        listen [::]:442 proxy_protocol  ssl http2;
        # PEM sha: ---
        ssl_certificate                         /ingress-controller/ssl/default-site1-tls-secret.pem;
        ssl_certificate_key                     /ingress-controller/ssl/default-site1-tls-secret.pem;

        more_set_headers                        "Strict-Transport-Security: max-age=15724800; includeSubDomains;";
        location / {
            set $proxy_upstream_name "default-site1-80";

            # enforce ssl on server side
            if ($pass_access_scheme = http) {
                return 301 https://$best_http_host$request_uri;
            }
            port_in_redirect off;

            client_max_body_size                    "20m";

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;
            proxy_set_header X-Forwarded-For        $the_real_ip;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   10s;
            proxy_send_timeout                      120s;
            proxy_read_timeout                      120s;

            proxy_redirect                          off;
            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://default-site1-80;
        }

    }

    # default server, used for NGINX healthcheck and access to nginx stats
    server {
        # Use the port 18080 (random value just to avoid known ports) as default port for nginx.
        # Changing this value requires a change in:
        # https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/nginx/command.go#L104
        listen 18080 default_server reuseport backlog=511;
        listen [::]:18080 default_server reuseport backlog=511;
        set $proxy_upstream_name "-";

        location /healthz {
            access_log off;
            return 200;
        }

        location /nginx_status {
            set $proxy_upstream_name "internal";

            access_log off;
            stub_status on;
        }

        # this location is used to extract nginx metrics
        # using prometheus.
        # TODO: enable extraction for vts module.
        location /internal_nginx_status {
            set $proxy_upstream_name "internal";

            allow 127.0.0.1;
            allow ::1;
            deny all;

            access_log off;
            stub_status on;
        }

        location / {
            set $proxy_upstream_name "upstream-default-backend";
            proxy_pass             http://upstream-default-backend;
        }

    }

    # default server for services without endpoints
    server {
        listen 8181;
        set $proxy_upstream_name "-";

        location / {
            return 503;
        }
    }
}

stream {
    log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;

    access_log /var/log/nginx/access.log log_stream;

    error_log  /var/log/nginx/error.log;

    # TCP services

    # UDP services
}
Greg Pagendam-Turner
I fixed the 503 for the http site blog.site2.com by recreating the deployment and the service for that site. That did not fix the https site.
Greg Pagendam-Turner

Answers:


This was caused by the Ingress configuration referencing the wrong Service name. After updating the Ingress to reference the correct Service, I no longer get the 503.
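
For illustration, a minimal sketch of the name matching that has to hold. The app: site1-wordpress selector is a hypothetical label, everything else mirrors the manifests above; if serviceName does not resolve to a Service with endpoints, this controller routes the host to the 8181 catch-all server visible in the generated nginx.conf, which returns 503.

# A Service whose metadata.name is "site1"; the selector label is
# hypothetical and only here to make the sketch self-contained.
apiVersion: v1
kind: Service
metadata:
  name: site1
spec:
  selector:
    app: site1-wordpress
  ports:
  - port: 80
    targetPort: 80
---
# The Ingress backend must reference that exact metadata.name; a typo
# here leaves the upstream pointing at the 8181 "503" server instead
# of the WordPress pods.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-rules
spec:
  rules:
  - host: site1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site1
          servicePort: 80

Once the names line up, kubectl describe ingress my-rules and kubectl get endpoints site1 should both show real pod addresses rather than nothing.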

Greg Pagendam-Turner
Also keep in mind that you can have multiple Ingresses. I generally create one Ingress per service; there is no need to cram everything into one file and confuse yourself (see the sketch below).
cohadar
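
A possible split along those lines, using the same extensions/v1beta1 API as the manifests above; the nginx ingress controller merges the rules from all Ingress objects it watches, so site1 and blog.site2.com can live in separate files. The name site2-blog is a hypothetical choice.

# Stand-alone Ingress for the blog only; site1 (and its TLS section)
# would get its own Ingress object in another file.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: site2-blog
spec:
  rules:
  - host: blog.site2.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site2
          servicePort: 80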

You can also get a 503 error from nginx when basic-auth is enabled in the Ingress and the nginx.ingress.kubernetes.io/auth-secret annotation refers to a secret that does not exist.

Adding the missing secret, or removing all basic-auth annotations from the Ingress, resolves the situation.
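
As a rough sketch of what that combination looks like: the annotation names are the ones from this answer, the secret name basic-auth is a placeholder, and such a secret is typically created from an htpasswd file with kubectl create secret generic basic-auth --from-file=auth.

# If these annotations are present, the referenced secret must exist in
# the same namespace as the Ingress, otherwise nginx answers 503.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-rules
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth   # placeholder secret name
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: site1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site1
          servicePort: 80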

Jorge P.

In my case it was caused by a wrong servicePort (a port other than the one defined in the Service).
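
A minimal sketch of the matching rule, with a hypothetical container port of 8080: servicePort in the Ingress refers to the Service's port, not to the pod's targetPort.

apiVersion: v1
kind: Service
metadata:
  name: site1
spec:
  selector:
    app: site1-wordpress      # hypothetical label
  ports:
  - port: 80                  # this is what the Ingress must reference
    targetPort: 8080          # hypothetical container port
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: site1-ingress         # hypothetical name
spec:
  rules:
  - host: site1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: site1
          servicePort: 80     # matches the Service port above, not 8080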

herm

Often the problem is simply not writing the correct service name in the Ingress YAML (backend: serviceName:), so be careful when writing your YAML files.

For example:

service.yml:

apiVersion: v1
kind: Service
metadata:
  name: my-service-name

ingress.yml:

spec:
  tls:
  - hosts:
    - my-domain-name.com
    secretName: ingress-tls
  rules:
  - host: my-domain-name.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service-name   # must match the Service metadata.name exactly
          servicePort: 80
Ognjen Stanisavljevic