# Overview

The nginx-k8s-edge-controller runs in a Kubernetes Cluster and responds to changes in resources of interest, updating designated NGINX Plus hosts with the appropriate configuration.

## Basic Architecture

The controller is deployed in a Kubernetes Cluster. Upon startup, it registers interest in changes to Service resources in the "nginx-ingress" namespace.
The Watcher receives the events raised by the Cluster and queues them for the Handler, which calls the Translator to convert each event into the definitions used to update NGINX Plus hosts.
Next, the Handler passes the list of translated events to the Synchronizer, where they are fanned out for each NGINX Plus host.
Lastly, the Synchronizer calls the [NGINX Plus Configuration API](https://docs.nginx.com/nginx/admin-guide/load-balancer/dynamic-configuration-api/) using the [NGINX Plus Go client](https://github.com/nginxinc/nginx-plus-go-client) to update the target NGINX Plus host(s).

```mermaid
stateDiagram-v2
    Controller --> Watcher
    Controller --> Settings
    Watcher --> Handler : "nkl-handler queue"
    Handler --> Translator
    Translator --> Handler
    Handler --> Synchronizer : "nkl-synchronizer queue"
    Synchronizer --> NGINXPlusLB1
    Synchronizer --> NGINXPlusLB2
    Synchronizer --> NGINXPlusLB...
    Synchronizer --> NGINXPlusLBn
```

### Settings

The Settings module is responsible for loading the configuration settings from the "nkl-config" ConfigMap resource in the "nkl" namespace.
The Settings are loaded when the controller starts and are reloaded whenever the "nkl-config" ConfigMap resource is updated.

### Watcher

The Watcher is responsible for monitoring changes to Service resources in the "nginx-ingress" namespace.
It registers methods that handle each event type; each event is handled by creating a `core.Event` instance and adding it to the "nkl-handler" queue.
When adding the event to the queue, the Watcher also retrieves the list of Node IP addresses and attaches it to the event.
The master node's IP address is excluded from the list. (NOTE: This should be configurable.)

### Handler

The Handler is responsible for taking the `core.Event` instances from the "nkl-handler" queue, calling the Translator to convert each event into `core.ServerUpdateEvent` instances,
and adding each `core.ServerUpdateEvent` to the "nkl-synchronizer" queue.

### Translator

The Translator is responsible for converting a `core.Event` instance into `nginxClient.UpstreamServer` update events.
This involves filtering out the `core.Event` instances that are not of interest to the controller by accepting only Port names starting with the NklPrefix value (currently _nkl-_).
The event is then fanned out based on the defined Ports, one event per defined Port. Each Port is then augmented with the Ingress name (the name configured in the Port definition with the NklPrefix value removed)
and the list of the Node's IP addresses.

The Translator passes the list of events to the Synchronizer by calling the `AddEvents` method.

**NOTE: It is important that the Port names match the names of the defined NGINX Plus Upstreams.**

In the following example the NGINX Plus Upstreams are named "nkl-nginx-lb-http" and "nkl-nginx-lb-https". These match the names in the NGINX Plus configuration.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: nkl-nginx-lb-http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: nkl-nginx-lb-https
  selector:
    app: nginx-ingress
```

### Synchronizer

The Synchronizer is responsible for fanning out the given list of `core.ServerUpdateEvent` events, one for each configured NGINX Plus host.
The NGINX Plus hosts are configured using a ConfigMap resource named "nkl-config" in the "nkl" namespace. An example of the ConfigMap is shown below.

```yaml
apiVersion: v1
kind: ConfigMap
data:
  nginx-hosts:
    "http://10.1.1.4:9000/api,http://10.1.1.5:9000/api"
metadata:
  name: nkl-config
  namespace: nkl
```
This example includes two NGINX Plus hosts to support High Availability.

Additionally, the Synchronizer is responsible for taking the `core.ServerUpdateEvent` instances from the "nkl-synchronizer" queue and updating the target NGINX Plus host.
The Synchronizer uses the [NGINX Plus Go client](https://github.com/nginxinc/nginx-plus-go-client) to communicate with each NGINX Plus host.

#### Retry Mechanism

The Synchronizer uses a retry mechanism to handle failures when updating the NGINX Plus hosts.
The retry mechanism is implemented in the workqueue using `workqueue.NewItemExponentialFailureRateLimiter`,
with defaults set to a base of 2 seconds and a maximum of 60 seconds.

#### Jitter

The Synchronizer uses a jitter mechanism to avoid thrashing the NGINX Plus hosts. Each `core.ServerUpdateEvent` instance
is added to the "nkl-synchronizer" queue with a random jitter value between 250 and 750 milliseconds.