From 40705e84bfd578d5dc3666665a04f4f54cecc9ba Mon Sep 17 00:00:00 2001
From: Steve Wagner
Date: Fri, 14 Apr 2023 11:52:10 -0700
Subject: [PATCH 1/9] Merging documentation changes

---
 README.md                   | 145 ++++++++++--------
 DESIGN.md => docs/DESIGN.md |   0
 docs/README.md              |  79 ++++++++++
 nkl-logo.svg                | 297 ++++++++++++++++++++++++++++++++++++
 4 files changed, 455 insertions(+), 66 deletions(-)
 rename DESIGN.md => docs/DESIGN.md (100%)
 create mode 100644 docs/README.md
 create mode 100644 nkl-logo.svg

diff --git a/README.md b/README.md
index 21fa5d70..bf4111d3 100644
--- a/README.md
+++ b/README.md
@@ -1,112 +1,125 @@
-# nginx-k8s-loadbalancer
+The NGINX K8s Loadbalancer, or _NKL_, is a Kubernetes controller that provides TCP load balancing external to a Kubernetes cluster running on-premise. -# Welcome to the Nginx Kubernetes Load Balancer Solution! +## Requirements -
+### Who needs NKL? -![Nginx K8s LB](docs/media/nkl-logo.png) | ![Nginx K8s LB](docs/media/nginx-2020.png) ---- | --- +- [ ] If you find yourself living in a world where Kubernetes is running on-premise instead of a cloud provider, you might need NKL. +- [ ] If you want exceptional, best-in-class load-balancing for your Kubernetes applications, you might need NKL. +- [ ] If you want the ability to manage your load-balancing configuration with the same tools you use to manage your Kubernetes cluster, you might need NKL. -
+### Why NKL? -This repo contains source code and documents for a new `Kubernetes Controller from Nginx`, that provides TCP and HTTP load balancing external to a Kubernetes Cluster running On Premises. +NKL provides a simple, easy-to-manage way to manage load-balancing for your Kubernetes applications by leveraging NGINX Plus hosts running outside your cluster. -
+NKL installs easily, has a small footprint, and is easy to configure and manage. ->>**This is a replacement for a Cloud Providers `Service Type Loadbalancer`, that is not available for On Premises Kubernetes Clusters.** +? {{review for embetterment}}: NKL does not require any specific domain knowledge for configuration, though you will have to understand NGINX configuration to get the most out of this solution. There is thorough documentation available about these specifics in the `docs/` directory. -
-
+### What does NKL do? +tl;dr: -# Overview +_**NKL is a Kubernetes controller that monitors Services and Nodes in your cluster, and then sends API calls to an external NGINX Plus server to manage NGINX Plus Upstream servers automatically.**_ -- `NKL - Nginx Kubernetes Loadbalancer` is a new K8s Controller from Nginx, that monitors specified K8s Services, and then sends API calls to an external Nginx Plus server to manage Nginx Upstream servers dynamically. -- This will `synchronize` the K8s Service Endpoint list, with the Nginx LB Server's upstream list. -- The primary use case and Solution provided is for tracking the K8s` NodePort` IP:Port definitions for the Nginx Ingress Controller's `nginx-ingress Service`. -- NKL is a native Kubernetes Controller, running, configured and managed with standard K8s commands. -- NKL paired with the Nginx Plus Server located external to the K8s cluster, this new controller LB function will provide a `TCP Load Balancer Service` for On Premises K8s clusters, which do not have access to a Cloud providers "Service Type LoadBalancer". -- NKL paired with the Nginx Plus Server located external to the Cluster, using Nginx's advanced HTTP features, provide an `HTTP Load Balancer Service` for Enterprise traffic management solutions, such as: - - MultiCluster Active/Active Load Balancing - - Horizontal Cluster Scaling - - HTTP Split Clients - for A/B, Blue/Green, and Canary test and production traffic steering. Allows Cluster operations/maintainence like upgrades, patching, expansion and troubleshooting with no downtime or reloads - - Advanced TLS Processing - MutualTLS, OCSP, FIPS, dynamic cert loading - - Advanced Security features - Oauth, JWT, App Protect WAF Firewall, Rate and Bandwidth limits - - Nginx Java Script (NJS) for custom solutions - - Nginx Zone Sync of KeyVal data +That's all well and good, but what does that mean? From the outside in, Kubernetes clusters require some tooling to handling routing from outside (e.g.: the Internet, corporate network, etc.) to the cluster. +This is typically done with a load balancer. The load balancer is responsible for routing traffic to the appropriate Kubernetes worker node which then forwards the traffic to the appropriate Pod. -
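To make the tl;dr above concrete: the upstream management NKL performs happens through the NGINX Plus REST API on the external host. Run by hand, that kind of call might look like the sketch below; the host name, API version, upstream name, and server address are assumptions for illustration, not values defined by this project.

```bash
# List the servers currently registered in one upstream on the external NGINX Plus host
curl -s http://nginx-plus.example.com:9000/api/8/stream/upstreams/k8s-ingress/servers

# Register a worker node's NodePort as an upstream server (the kind of change NKL automates)
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"server": "10.1.1.8:30443"}' \
  http://nginx-plus.example.com:9000/api/8/stream/upstreams/k8s-ingress/servers
```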
+If you are using a hosted web solution -- Digital Ocean, AWS, Azure, etc. -- you can use the cloud provider's load balancer service. This service will create a load balancer for you, and then manage the configuration of the load balancer for you. +You can use the cloud provider's API to manage the load balancer, or you can use the cloud provider's web console. -## NKL Controller Software Design Overview - How it works +However, if you checked the first box above, you are running Kubernetes on-premise and will need to manage your own load balancer. This is where NKL comes in. -[NKL Controller DESIGN and Architecture](DESIGN.md) +NKL itself does not perform load balancing. Instead, NKL allows you to manage Service resources within your cluster and have the load balancers automatically be updated to support those changes, all with tooling you are most likely already using. -
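As a purely illustrative example of the Service resources in question, a NodePort Service in front of an ingress controller might look like the sketch below. The name, labels, and port numbers are assumptions for the example, not values required by NKL; the point is that each worker node's IP plus the NodePort is the address list an external load balancer needs.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress        # hypothetical Service being tracked
  namespace: nginx-ingress
spec:
  type: NodePort             # exposes the Service on every worker node's IP
  selector:
    app: nginx-ingress
  ports:
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443        # example value; Kubernetes can also auto-assign one
```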
+## Getting Started -## Reference Diagram for NKL TCP Load Balancer Service +There are few bits of administrivia to get out of the way before you can start leveraging NKL for your load balancing needs. -
+As noted above, NKL really shines when you have one or more Kubernetes clusters running on-premise. With this in place, +you need to have at least one NGINX Plus host running outside your cluster (Please refer to the [Roadmap](#Roadmap) for information about other load balancer servers). -![NKL Stream Diagram](docs/media/nkl-stream-diagram.png) +You will not need to clone this repo to use NKL. Instead, you can install NKL using the included Manifest files (just copy the `deployments/` directory), which pulls the NKL image from the Container Registry. -
+### RBAC -## Sample Screenshots of Solution at Runtime +As with everything Kubernetes, NKL requires RBAC permissions to function properly. The necessary resources are defined in the various YAML files in `deployement/rabc/`. -
+For convenience, two scripts are included, `apply.sh`, and `unapply.sh`. These scripts will apply or remove the RBAC resources, respectively. -![NGINX LB ConfigMap](docs/media/nkl-configmap.png) -### ConfigMap with 2 Nginx LB Servers defined for HA +The permissions required by NKL are modest. NKL requires the ability to read Resources via shared informers; the resources are Services, Nodes, and ConfigMaps. The Services and ConfigMap are restricted to a specific namespace (default: "nkl"). The Nodes resource is cluster-wide. -
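Written out as plain RBAC manifests, the read-only access described above amounts to roughly the following sketch; the object names are placeholders, and the authoritative definitions are the YAML files shipped in the repo's RBAC directory.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role                   # namespaced: Services and ConfigMaps in the "nkl" namespace
metadata:
  name: nkl-reader           # placeholder name
  namespace: nkl
rules:
  - apiGroups: [""]
    resources: ["services", "configmaps"]
    verbs: ["get", "list", "watch"]   # what a read-only shared informer needs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole            # cluster-wide: Nodes are not namespaced
metadata:
  name: nkl-node-reader      # placeholder name
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
```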
+### Configuration -![NGINX LB Create Nodeport](docs/media/nkl-stream-create-nodeport.png) -### Nginx LB Server Dashboard, NodePort, and NKL Controller Logging +NKL is configured via a ConfigMap, the default settings are found in `deployment/configmap.yaml`. Presently there is a single configuration value exposed in the ConfigMap, `nginx-hosts`. +This contains a comma-separated list of NGINX Plus hosts that NKL will maintain. -### Legend: -- Red - kubectl nodeport commands -- Blue - nodeport and upstreams for http traffic -- Indigo - nodeport and upstreams for https traffic -- Green - NKL log for api calls to LB Server #1 -- Orange - Nginx LB Server upstream dashboard details -- Kubernetes Worker Nodes are 10.1.1.8 and 10.1.1.10 +You will need to update this ConfigMap to reflect the NGINX Plus hosts you wish to manage. -
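A filled-in ConfigMap might look like the sketch below; the metadata name and the two host URLs are assumptions, so substitute the API endpoints of your own NGINX Plus hosts.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nkl-config           # placeholder; see deployments/configmap.yaml for the real name
  namespace: nkl
data:
  # Comma-separated list of NGINX Plus hosts that NKL keeps in sync
  nginx-hosts: "http://10.1.1.4:9000/api,http://10.1.1.5:9000/api"
```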
+If you were to deploy the ConfigMap and start NKL without updating the `nginx-hosts` value, don't fear; the ConfigMap is monitored for changes and NKL will update the NGINX Plus hosts accordingly when the resource is changed, no restart required. -The `Installation Guide` for TCP Loadbalancer Solution is located in the docs/tcp folder: +### Deployment -[TCP Installation Guide](docs/tcp/tcp-installation-guide.md) +There is an extensive [Installation Guide](docs/InstallationGuide.md) available in the `docs/` directory. Please refer to that for detailed instructions on how to deploy NKL and run a demo application. -
+To get NKL up and running in ten steps or fewer, follow these instructions (NOTE, all the aforementioned prerequisites must be met for this to work): -The `Installation Guide` for HTTP Loadbalancer Solution is located in the docs/http folder: +1. Clone this repo (optional, you can simply copy the `deployments/` directory) +`git clone git@github.com:nginxinc/nginx-k8s-loadbalancer.git` -[HTTP Installation Guide](docs/http/http-installation-guide.md) +2. Apply the RBAC resources +`./deployments/rbac/apply.sh` -
+3. Apply the Namespace +`kubectl apply -f deployments/namespace.yaml` -## Requirements +4. Update / Apply the ConfigMap + - For best results add the `nginx-hosts` value to the ConfigMap + - `kubectl apply -f deployments/configmap.yaml` + +5. Apply the Deployment +`kubectl apply -f deployments/deployment.yaml` + +6. Check the logs +`kubectl -n nkl get pods | grep nkl-deployment | cut -f1 -d" " | xargs kubectl logs -n nkl --follow $1` + +At this point NKL should be up and running. Now would be a great time to go over to the [Installation Guide](docs/InstallationGuide.md) and follow the instructions to deploy a demo application. -Please see the /docs folder and Installation Guides for detailed documentation. +### Monitoring -
+Presently NKL includes a fair amount of logging. This is intended to be used for debugging purposes. There are plans to add more robust monitoring and alerting in the future. -## Development +As a rule, we support the use of OpenTelemetry for observability, and we will be adding support in the near future. -Read the [`CONTRIBUTING.md`](https://github.com/nginxinc/nginx-k8s-loadbalancer/blob/main/CONTRIBUTING.md) file. +## Contributing -
+Presently we are not accepting pull requests. However, we welcome your feedback and suggestions. Please open an issue to let us know what you think! -## Authors -- Chris Akker - Solutions Architect - Community and Alliances @ F5, Inc. -- Steve Wagner - Solutions Architect - Community and Alliances @ F5, Inc. +## Roadmap -
+While NKL was initially written specifically for NGINX Plus, we recognize there are other load-balancers that can be supported. + +To this end, NKL has been architected to be extensible to support other "Border Servers". +Border Servers are the term NKL uses to describe load-balancers, reverse proxies, etc. that run outside the cluster and handle +routing outside traffic to your cluster. + +While we have identified a few potential targets, we are open to suggestions. Please open an issue to share your thoughts on potential targets. + +We do hope to realize enough community interest to warrant opening the project to pull requests and other contributions. ## License [Apache License, Version 2.0](https://github.com/nginxinc/nginx-k8s-loadbalancer/blob/main/LICENSE) -© [F5 Networks, Inc.](https://www.f5.com/) 2023 +© [F5, Inc.](https://www.f5.com/) 2023 + +(but don't let that scare you, we're really nice people...) diff --git a/DESIGN.md b/docs/DESIGN.md similarity index 100% rename from DESIGN.md rename to docs/DESIGN.md diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 00000000..76d14bf7 --- /dev/null +++ b/docs/README.md @@ -0,0 +1,79 @@ +# nginx-k8s-loadbalancer + +## Welcome to the Nginx Kubernetes Load Balancer project ! + +
+ +This repo contains source code and documents for a new Kubernetes Controller, that provides TCP load balancing external to a Kubernetes Cluster running On Premises. + +
+ +>>**This is a replacement for a Cloud Providers "Service Type Loadbalancer", that is missing from On Premises Kubernetes Clusters.** + +
+ +## Overview + +- Create a new K8s Controller, that will monitor specified k8s Services, and then send API calls to an external Nginx Plus server to manage Nginx Upstream servers automatically. +- This will `synchronize` the K8s Service Endpoint list, with the Nginx LB server's Upstream server list. +- The primary use case is for tracking the NodePort IP:Port definitions for the Nginx Ingress Controller's `nginx-ingress Service`. +- With the Nginx Plus Server located external to the K8s cluster, this new controller LB function would provide an alternative TCP "Load Balancer Service" for On Premises K8s clusters, which do not have access to a Cloud providers "Service Type LoadBalancer". +- Make the solution a native Kubernetes Component, running, configured and managed with standard K8s commands. + +
+ +## Reference Diagram + +
+ +![NGINX LB Server](media/nginxlb-nklv2.png) + +
+ +## Sample Screenshots of Runtime + +
+ +### Configuration with 2 Nginx LB Servers defined (HA): + +![NGINX LB ConfigMap](media/nkl-pod-configmap.png) + +
+ +### Nginx LB Server Dashboard and Logging + +![NGINX LB Create Nodeport](media/nkl-create-nodeport.png) + +Legend: +- Red - kubectl commands +- Blue - nodeport and upstreams for http traffic +- Indigo - nodeport and upstreams for https traffic +- Green - logs for api calls to LB Server #1 +- Orange - Nginx LB Server upstream dashboard details +- Kubernetes Worker Nodes are 10.1.1.8 and 10.1.1.10 + +
+ +## Requirements + +Please see the /docs folder for detailed documentation. + +
+ +## Installation + +Please see the /docs folder for Installation Guide. + +
+ +## Development + +Read the [`CONTRIBUTING.md`](https://github.com/nginxinc/nginx-k8s-loadbalancer/blob/main/CONTRIBUTING.md) file. + +
+
+## License
+
+[Apache License, Version 2.0](https://github.com/nginxinc/nginx-k8s-loadbalancer/blob/main/LICENSE)
+
+© [F5 Networks, Inc.](https://www.f5.com/) 2023
diff --git a/nkl-logo.svg b/nkl-logo.svg
new file mode 100644
index 00000000..22fb582d
--- /dev/null
+++ b/nkl-logo.svg
@@ -0,0 +1,297 @@
+<!-- SVG markup omitted: "hexatarget" graphic from Openclipart,
+     https://openclipart.org/detail/99799/hexatarget-by-10binary, by 10binary,
+     dated 2010-12-08, keywords: black, hexagon, target, triangle, white -->

From 1df76f331fa346e4a6b8964a8e1a76c8475eba65 Mon Sep 17 00:00:00 2001
From: Steve Wagner
Date: Fri, 14 Apr 2023 14:53:20 -0700
Subject: [PATCH 2/9] CHECKPOINT - More tweaking

---
 README.md | 56 +++++++++++++++++++++++++++++++++----------------------
 1 file changed, 34 insertions(+), 22 deletions(-)

diff --git a/README.md b/README.md
index bf4111d3..df67a038 100644
--- a/README.md
+++ b/README.md
@@ -19,7 +19,7 @@ The NGINX K8s Loadbalancer, or _NKL_, is a Kubernetes controller that provides T
 
 ### Why NKL?
 
-NKL provides a simple, easy-to-manage way to manage load-balancing for your Kubernetes applications by leveraging NGINX Plus hosts running outside your cluster.
+NKL provides a simple, easy-to-manage way to automate load balancing for your Kubernetes applications by leveraging NGINX Plus hosts running outside your cluster.
 
 NKL installs easily, has a small footprint, and is easy to configure and manage.
 
@@ -31,15 +31,15 @@ tl;dr:
 
 _**NKL is a Kubernetes controller that monitors Services and Nodes in your cluster, and then sends API calls to an external NGINX Plus server to manage NGINX Plus Upstream servers automatically.**_
 
-That's all well and good, but what does that mean? From the outside in, Kubernetes clusters require some tooling to handling routing from outside (e.g.: the Internet, corporate network, etc.) to the cluster.
-This is typically done with a load balancer. The load balancer is responsible for routing traffic to the appropriate Kubernetes worker node which then forwards the traffic to the appropriate Pod.
+That's all well and good, but what does that mean? Well, Kubernetes clusters require some tooling to handling routing traffic from the outside world (e.g.: the Internet, corporate network, etc.) to the cluster.
+This is typically done with a load balancer. The load balancer is responsible for routing traffic to the appropriate Kubernetes worker node which then forwards the traffic to the appropriate Service / Pod.
 
-If you are using a hosted web solution -- Digital Ocean, AWS, Azure, etc. -- you can use the cloud provider's load balancer service. This service will create a load balancer for you, and then manage the configuration of the load balancer for you.
+If you are using a hosted web solution -- Digital Ocean, AWS, Azure, etc. -- you can use the cloud provider's load balancer service. This service will create a load balancer for you.
 You can use the cloud provider's API to manage the load balancer, or you can use the cloud provider's web console.
 
 However, if you checked the first box above, you are running Kubernetes on-premise and will need to manage your own load balancer. This is where NKL comes in.
 
-NKL itself does not perform load balancing.
Instead, NKL allows you to manage Service resources within your cluster and have the load balancers automatically be updated to support those changes, all with tooling you are most likely already using. +NKL itself does not perform load balancing. Instead, NKL allows you to manage resources within your cluster and have the load balancers automatically be updated to support those changes, with tooling you are most likely already using. ## Getting Started @@ -52,11 +52,12 @@ You will not need to clone this repo to use NKL. Instead, you can install NKL us ### RBAC -As with everything Kubernetes, NKL requires RBAC permissions to function properly. The necessary resources are defined in the various YAML files in `deployement/rabc/`. +As with everything Kubernetes, NKL requires RBAC permissions to function properly. The necessary resources are defined in the various YAML files in `deployement/rbac/`. For convenience, two scripts are included, `apply.sh`, and `unapply.sh`. These scripts will apply or remove the RBAC resources, respectively. -The permissions required by NKL are modest. NKL requires the ability to read Resources via shared informers; the resources are Services, Nodes, and ConfigMaps. The Services and ConfigMap are restricted to a specific namespace (default: "nkl"). The Nodes resource is cluster-wide. +The permissions required by NKL are modest. NKL requires the ability to read Resources via shared informers; the resources are Services, Nodes, and ConfigMaps. +The Services and ConfigMap are restricted to a specific namespace (default: "nkl"). The Nodes resource is cluster-wide. ### Configuration @@ -65,44 +66,53 @@ This contains a comma-separated list of NGINX Plus hosts that NKL will maintain. You will need to update this ConfigMap to reflect the NGINX Plus hosts you wish to manage. -If you were to deploy the ConfigMap and start NKL without updating the `nginx-hosts` value, don't fear; the ConfigMap is monitored for changes and NKL will update the NGINX Plus hosts accordingly when the resource is changed, no restart required. +If you were to deploy the ConfigMap and start NKL without updating the `nginx-hosts` value, don't fear; the ConfigMap resource is monitored for changes and NKL will update the NGINX Plus hosts accordingly when the resource is changed, no restart required. ### Deployment -There is an extensive [Installation Guide](docs/InstallationGuide.md) available in the `docs/` directory. Please refer to that for detailed instructions on how to deploy NKL and run a demo application. +There is an extensive [Installation Guide](docs/InstallationGuide.md) available in the `docs/` directory. +Please refer to that for detailed instructions on how to deploy NKL and run a demo application. To get NKL up and running in ten steps or fewer, follow these instructions (NOTE, all the aforementioned prerequisites must be met for this to work): 1. Clone this repo (optional, you can simply copy the `deployments/` directory) -`git clone git@github.com:nginxinc/nginx-k8s-loadbalancer.git` + +```git clone git@github.com:nginxinc/nginx-k8s-loadbalancer.git``` 2. Apply the RBAC resources -`./deployments/rbac/apply.sh` + +```./deployments/rbac/apply.sh``` 3. Apply the Namespace -`kubectl apply -f deployments/namespace.yaml` -4. Update / Apply the ConfigMap - - For best results add the `nginx-hosts` value to the ConfigMap - - `kubectl apply -f deployments/configmap.yaml` +```kubectl apply -f deployments/namespace.yaml``` + +4. 
Update / Apply the ConfigMap (For best results update the `nginx-hosts` values first) + +```kubectl apply -f deployments/configmap.yaml``` 5. Apply the Deployment -`kubectl apply -f deployments/deployment.yaml` + +```kubectl apply -f deployments/deployment.yaml``` 6. Check the logs -`kubectl -n nkl get pods | grep nkl-deployment | cut -f1 -d" " | xargs kubectl logs -n nkl --follow $1` -At this point NKL should be up and running. Now would be a great time to go over to the [Installation Guide](docs/InstallationGuide.md) and follow the instructions to deploy a demo application. +```kubectl -n nkl get pods | grep nkl-deployment | cut -f1 -d" " | xargs kubectl logs -n nkl --follow $1``` + +At this point NKL should be up and running. Now would be a great time to go over to the [Installation Guide](docs/InstallationGuide.md) +and follow the instructions to deploy a demo application. ### Monitoring -Presently NKL includes a fair amount of logging. This is intended to be used for debugging purposes. There are plans to add more robust monitoring and alerting in the future. +Presently NKL includes a fair amount of logging. This is intended to be used for debugging purposes. +There are plans to add more robust monitoring and alerting in the future. -As a rule, we support the use of OpenTelemetry for observability, and we will be adding support in the near future. +As a rule, we support the use of [OpenTelemetry](https://opentelemetry.io/) for observability, and we will be adding support in the near future. ## Contributing -Presently we are not accepting pull requests. However, we welcome your feedback and suggestions. Please open an issue to let us know what you think! +Presently we are not accepting pull requests. However, we welcome your feedback and suggestions. +Please open an issue to let us know what you think! ## Roadmap @@ -114,7 +124,9 @@ routing outside traffic to your cluster. While we have identified a few potential targets, we are open to suggestions. Please open an issue to share your thoughts on potential targets. -We do hope to realize enough community interest to warrant opening the project to pull requests and other contributions. +We look forward to building a community around NKL and value all feedback and suggestions. Varying perspectives and embracing +diverse ideas will be key to NKL becoming a solution that is useful to the community. We will consider it a success +when we are able to accept pull requests from the community. ## License From c2a9d4950dba60d6282059092612d403aa81c13b Mon Sep 17 00:00:00 2001 From: Steve Wagner Date: Fri, 14 Apr 2023 15:00:42 -0700 Subject: [PATCH 3/9] CHECKPOINT - More tweaking --- README.md | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index df67a038..f465d558 100644 --- a/README.md +++ b/README.md @@ -34,7 +34,7 @@ _**NKL is a Kubernetes controller that monitors Services and Nodes in your clust That's all well and good, but what does that mean? Well, Kubernetes clusters require some tooling to handling routing traffic from the outside world (e.g.: the Internet, corporate network, etc.) to the cluster. This is typically done with a load balancer. The load balancer is responsible for routing traffic to the appropriate Kubernetes worker node which then forwards the traffic to the appropriate Service / Pod. -If you are using a hosted web solution -- Digital Ocean, AWS, Azure, etc. -- you can use the cloud provider's load balancer service. This service will create a load balancer for you. 
+If you are using a hosted web solution -- Digital Ocean, AWS, Azure, etc. -- you can use the cloud provider's load balancer service. Those services will create a load balancer for you. You can use the cloud provider's API to manage the load balancer, or you can use the cloud provider's web console. However, if you checked the first box above, you are running Kubernetes on-premise and will need to manage your own load balancer. This is where NKL comes in. @@ -114,6 +114,11 @@ As a rule, we support the use of [OpenTelemetry](https://opentelemetry.io/) for Presently we are not accepting pull requests. However, we welcome your feedback and suggestions. Please open an issue to let us know what you think! +One way to contribute is to help us test NKL. We are looking for people to test NKL in a variety of environments. + +If you are curious about the implementation, you should certainly browse the code, but first you might wish to refer to the [Design](docs/DESIGN.md) document. +Some of the design decisions are explained there. + ## Roadmap While NKL was initially written specifically for NGINX Plus, we recognize there are other load-balancers that can be supported. From 5b9284991fac44b6dfcb78c0c5ec9da11c8953fd Mon Sep 17 00:00:00 2001 From: Steve Wagner Date: Fri, 14 Apr 2023 15:07:31 -0700 Subject: [PATCH 4/9] CHECKPOINT - More tweaking --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index f465d558..ebe8296a 100644 --- a/README.md +++ b/README.md @@ -116,7 +116,7 @@ Please open an issue to let us know what you think! One way to contribute is to help us test NKL. We are looking for people to test NKL in a variety of environments. -If you are curious about the implementation, you should certainly browse the code, but first you might wish to refer to the [Design](docs/DESIGN.md) document. +If you are curious about the implementation, you should certainly browse the code, but first you might wish to refer to the [design document](docs/DESIGN.md). Some of the design decisions are explained there. ## Roadmap @@ -127,7 +127,7 @@ To this end, NKL has been architected to be extensible to support other "Border Border Servers are the term NKL uses to describe load-balancers, reverse proxies, etc. that run outside the cluster and handle routing outside traffic to your cluster. -While we have identified a few potential targets, we are open to suggestions. Please open an issue to share your thoughts on potential targets. +While we have identified a few potential targets, we are open to suggestions. Please open an issue to share your thoughts on potential implementations. We look forward to building a community around NKL and value all feedback and suggestions. Varying perspectives and embracing diverse ideas will be key to NKL becoming a solution that is useful to the community. We will consider it a success From 5dc3e380b2750c511a08f2e828554d94c0546a22 Mon Sep 17 00:00:00 2001 From: Steve Wagner Date: Mon, 24 Apr 2023 15:52:31 -0700 Subject: [PATCH 5/9] Updates README --- README.md | 53 +++++++++++++++++++++++++++++++++++------------------ 1 file changed, 35 insertions(+), 18 deletions(-) diff --git a/README.md b/README.md index ebe8296a..85e82e9a 100644 --- a/README.md +++ b/README.md @@ -11,11 +11,21 @@ The NGINX K8s Loadbalancer, or _NKL_, is a Kubernetes controller that provides T ## Requirements -### Who needs NKL? +[//]: # (### Who needs NKL?) 
-- [ ] If you find yourself living in a world where Kubernetes is running on-premise instead of a cloud provider, you might need NKL. -- [ ] If you want exceptional, best-in-class load-balancing for your Kubernetes applications, you might need NKL. -- [ ] If you want the ability to manage your load-balancing configuration with the same tools you use to manage your Kubernetes cluster, you might need NKL. +[//]: # () +[//]: # (- [ ] If you find yourself living in a world where Kubernetes is running on-premise instead of a cloud provider, you might need NKL.) + +[//]: # (- [ ] If you want exceptional, best-in-class load-balancing for your Kubernetes clusters by using NGINX Plus, you might need NKL.) + +[//]: # (- [ ] If you want the ability to manage your load-balancing configuration with the same tools you use to manage your Kubernetes cluster, you might need NKL.) + +### What you will need + +- [ ] A Kubernetes cluster running on-premise. +- [ ] One or more NGINX Plus hosts running outside your Kubernetes cluster (NGINX Plus hosts must have the ability to route traffic to the cluster). + +There is a more detailed [Installation Guide](docs/InstallationGuide.md) available in the `docs/` directory. ### Why NKL? @@ -23,7 +33,8 @@ NKL provides a simple, easy-to-manage way to automate load balancing for your Ku NKL installs easily, has a small footprint, and is easy to configure and manage. -? {{review for embetterment}}: NKL does not require any specific domain knowledge for configuration, though you will have to understand NGINX configuration to get the most out of this solution. There is thorough documentation available about these specifics in the `docs/` directory. +NKL does not require learning a custom object model, you only have to understand NGINX configuration to get the most out of this solution. +There is thorough documentation available with the specifics in the `docs/` directory. ### What does NKL do? @@ -31,49 +42,55 @@ tl;dr: _**NKL is a Kubernetes controller that monitors Services and Nodes in your cluster, and then sends API calls to an external NGINX Plus server to manage NGINX Plus Upstream servers automatically.**_ -That's all well and good, but what does that mean? Well, Kubernetes clusters require some tooling to handling routing traffic from the outside world (e.g.: the Internet, corporate network, etc.) to the cluster. -This is typically done with a load balancer. The load balancer is responsible for routing traffic to the appropriate Kubernetes worker node which then forwards the traffic to the appropriate Service / Pod. +That's all well and good, but what does it mean? Kubernetes clusters require some tooling to handling routing traffic from the outside world (e.g.: the Internet, corporate network, etc.) to the cluster. +This is typically done with a load balancer. The load balancer is responsible for routing traffic to the appropriate worker node which then forwards the traffic to the appropriate Service / Pod. If you are using a hosted web solution -- Digital Ocean, AWS, Azure, etc. -- you can use the cloud provider's load balancer service. Those services will create a load balancer for you. You can use the cloud provider's API to manage the load balancer, or you can use the cloud provider's web console. -However, if you checked the first box above, you are running Kubernetes on-premise and will need to manage your own load balancer. This is where NKL comes in. +If you are running Kubernetes on-premise and will need to manage your own load balancer, NKL can help. 
-NKL itself does not perform load balancing. Instead, NKL allows you to manage resources within your cluster and have the load balancers automatically be updated to support those changes, with tooling you are most likely already using. +NKL itself does not perform load balancing. Rather, NKL allows you to manage Service resources within your cluster to update your load balancers, with tooling you are most likely already using. ## Getting Started There are few bits of administrivia to get out of the way before you can start leveraging NKL for your load balancing needs. -As noted above, NKL really shines when you have one or more Kubernetes clusters running on-premise. With this in place, +As noted above, NKL is intended for when you have one or more Kubernetes clusters running on-premise. In addition to this, you need to have at least one NGINX Plus host running outside your cluster (Please refer to the [Roadmap](#Roadmap) for information about other load balancer servers). -You will not need to clone this repo to use NKL. Instead, you can install NKL using the included Manifest files (just copy the `deployments/` directory), which pulls the NKL image from the Container Registry. +### Deployment -### RBAC +#### RBAC As with everything Kubernetes, NKL requires RBAC permissions to function properly. The necessary resources are defined in the various YAML files in `deployement/rbac/`. For convenience, two scripts are included, `apply.sh`, and `unapply.sh`. These scripts will apply or remove the RBAC resources, respectively. -The permissions required by NKL are modest. NKL requires the ability to read Resources via shared informers; the resources are Services, Nodes, and ConfigMaps. +The permissions required by NKL are modest. NKL requires the ability to read Resources via shared informers; the resources are Services, Nodes, and ConfigMaps. The Services and ConfigMap are restricted to a specific namespace (default: "nkl"). The Nodes resource is cluster-wide. -### Configuration +#### Configuration -NKL is configured via a ConfigMap, the default settings are found in `deployment/configmap.yaml`. Presently there is a single configuration value exposed in the ConfigMap, `nginx-hosts`. +NKL is configured via a ConfigMap, the default settings are found in `deployment/configmap.yaml`. Presently there is a single configuration value exposed in the ConfigMap, `nginx-hosts`. This contains a comma-separated list of NGINX Plus hosts that NKL will maintain. You will need to update this ConfigMap to reflect the NGINX Plus hosts you wish to manage. If you were to deploy the ConfigMap and start NKL without updating the `nginx-hosts` value, don't fear; the ConfigMap resource is monitored for changes and NKL will update the NGINX Plus hosts accordingly when the resource is changed, no restart required. -### Deployment - There is an extensive [Installation Guide](docs/InstallationGuide.md) available in the `docs/` directory. Please refer to that for detailed instructions on how to deploy NKL and run a demo application. -To get NKL up and running in ten steps or fewer, follow these instructions (NOTE, all the aforementioned prerequisites must be met for this to work): +#### Versioning + +Versioning is a work in progress. The CI/CD pipeline is being developed and will be used to build and publish NKL images to the Container Registry. +Once in place, semantic versioning will be used for published images. 
+ +#### Deployment Steps + +To get NKL up and running in ten steps or fewer, follow these instructions (NOTE, all the aforementioned prerequisites must be met for this to work). +There is a much more detailed [Installation Guide](docs/InstallationGuide.md) available in the `docs/` directory. 1. Clone this repo (optional, you can simply copy the `deployments/` directory) From d29b6f1a5065167d46ecf60e3caf5cbff2c95ec9 Mon Sep 17 00:00:00 2001 From: Steve Wagner Date: Tue, 2 May 2023 10:57:28 -0700 Subject: [PATCH 6/9] Updates the original README --- docs/README.md | 81 +++++++++++++++++++++++++++++++++++--------------- 1 file changed, 57 insertions(+), 24 deletions(-) diff --git a/docs/README.md b/docs/README.md index 76d14bf7..214b4c67 100644 --- a/docs/README.md +++ b/docs/README.md @@ -1,68 +1,95 @@ # nginx-k8s-loadbalancer -## Welcome to the Nginx Kubernetes Load Balancer project ! -
-This repo contains source code and documents for a new Kubernetes Controller, that provides TCP load balancing external to a Kubernetes Cluster running On Premises. +# Welcome to the Nginx Kubernetes Load Balancer Solution!
->>**This is a replacement for a Cloud Providers "Service Type Loadbalancer", that is missing from On Premises Kubernetes Clusters.** +![Nginx K8s LB](media/nkl-logo.png) | ![Nginx K8s LB](media/nginx-2020.png) +--- | ---
-## Overview +This repo contains source code and documents for a new `Kubernetes Controller from Nginx`, that provides TCP and HTTP load balancing external to a Kubernetes Cluster running On Premises. + +
-- Create a new K8s Controller, that will monitor specified k8s Services, and then send API calls to an external Nginx Plus server to manage Nginx Upstream servers automatically. -- This will `synchronize` the K8s Service Endpoint list, with the Nginx LB server's Upstream server list. -- The primary use case is for tracking the NodePort IP:Port definitions for the Nginx Ingress Controller's `nginx-ingress Service`. -- With the Nginx Plus Server located external to the K8s cluster, this new controller LB function would provide an alternative TCP "Load Balancer Service" for On Premises K8s clusters, which do not have access to a Cloud providers "Service Type LoadBalancer". -- Make the solution a native Kubernetes Component, running, configured and managed with standard K8s commands. +>>**This is a replacement for a Cloud Providers `Service Type Loadbalancer`, that is not available for On Premises Kubernetes Clusters.**
+
+ -## Reference Diagram +# Overview + +- `NKL - Nginx Kubernetes Loadbalancer` is a new K8s Controller from Nginx, that monitors specified K8s Services, and then sends API calls to an external Nginx Plus server to manage Nginx Upstream servers dynamically. +- This will `synchronize` the K8s Service Endpoint list, with the Nginx LB Server's upstream list. +- The primary use case and Solution provided is for tracking the K8s` NodePort` IP:Port definitions for the Nginx Ingress Controller's `nginx-ingress Service`. +- NKL is a native Kubernetes Controller, running, configured and managed with standard K8s commands. +- NKL paired with the Nginx Plus Server located external to the K8s cluster, this new controller LB function will provide a `TCP Load Balancer Service` for On Premises K8s clusters, which do not have access to a Cloud providers "Service Type LoadBalancer". +- NKL paired with the Nginx Plus Server located external to the Cluster, using Nginx's advanced HTTP features, provide an `HTTP Load Balancer Service` for Enterprise traffic management solutions, such as: + - MultiCluster Active/Active Load Balancing + - Horizontal Cluster Scaling + - HTTP Split Clients - for A/B, Blue/Green, and Canary test and production traffic steering. Allows Cluster operations/maintainence like upgrades, patching, expansion and troubleshooting with no downtime or reloads + - Advanced TLS Processing - MutualTLS, OCSP, FIPS, dynamic cert loading + - Advanced Security features - Oauth, JWT, App Protect WAF Firewall, Rate and Bandwidth limits + - Nginx Java Script (NJS) for custom solutions + - Nginx Zone Sync of KeyVal data
-![NGINX LB Server](media/nginxlb-nklv2.png) +## NKL Controller Software Design Overview - How it works + +[NKL Controller DESIGN and Architecture](DESIGN.md)
-## Sample Screenshots of Runtime +## Reference Diagram for NKL TCP Load Balancer Service
-### Configuration with 2 Nginx LB Servers defined (HA): +![NKL Stream Diagram](media/nkl-stream-diagram.png) -![NGINX LB ConfigMap](media/nkl-pod-configmap.png) +
+ +## Sample Screenshots of Solution at Runtime
-### Nginx LB Server Dashboard and Logging +![NGINX LB ConfigMap](media/nkl-configmap.png) +### ConfigMap with 2 Nginx LB Servers defined for HA + +
-![NGINX LB Create Nodeport](media/nkl-create-nodeport.png) +![NGINX LB Create Nodeport](media/nkl-stream-create-nodeport.png) +### Nginx LB Server Dashboard, NodePort, and NKL Controller Logging -Legend: -- Red - kubectl commands +### Legend: +- Red - kubectl nodeport commands - Blue - nodeport and upstreams for http traffic - Indigo - nodeport and upstreams for https traffic -- Green - logs for api calls to LB Server #1 +- Green - NKL log for api calls to LB Server #1 - Orange - Nginx LB Server upstream dashboard details - Kubernetes Worker Nodes are 10.1.1.8 and 10.1.1.10
-## Requirements +The `Installation Guide` for TCP Loadbalancer Solution is located in the tcp folder: + +[TCP Installation Guide](tcp/tcp-installation-guide.md) + +
-Please see the /docs folder for detailed documentation. +The `Installation Guide` for HTTP Loadbalancer Solution is located in the http folder: + +[HTTP Installation Guide](http/http-installation-guide.md)
-## Installation +## Requirements -Please see the /docs folder for Installation Guide. +Please see the /docs folder and Installation Guides for detailed documentation.
@@ -72,6 +99,12 @@ Read the [`CONTRIBUTING.md`](https://github.com/nginxinc/nginx-k8s-loadbalancer/
+## Authors +- Chris Akker - Solutions Architect - Community and Alliances @ F5, Inc. +- Steve Wagner - Solutions Architect - Community and Alliances @ F5, Inc. + +
+ ## License [Apache License, Version 2.0](https://github.com/nginxinc/nginx-k8s-loadbalancer/blob/main/LICENSE) From 2c6fe6ba0a2f5aa0c1b04f92d3a06e401ce9eaaa Mon Sep 17 00:00:00 2001 From: Steve Wagner Date: Tue, 2 May 2023 11:02:28 -0700 Subject: [PATCH 7/9] Updates links in the READM --- docs/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/README.md b/docs/README.md index 214b4c67..c9be0efb 100644 --- a/docs/README.md +++ b/docs/README.md @@ -49,7 +49,7 @@ This repo contains source code and documents for a new `Kubernetes Controller fr
-![NKL Stream Diagram](media/nkl-stream-diagram.png) +![NKL Stream Diagram](media/nkl-blog-diagram-v1.png)
From 32de01537025ac7b68d9de6ed15d0f1e8cc4b190 Mon Sep 17 00:00:00 2001 From: Steve Wagner Date: Tue, 2 May 2023 11:06:41 -0700 Subject: [PATCH 8/9] Add diagram to main README --- README.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/README.md b/README.md index 85e82e9a..d01a577a 100644 --- a/README.md +++ b/README.md @@ -52,6 +52,8 @@ If you are running Kubernetes on-premise and will need to manage your own load b NKL itself does not perform load balancing. Rather, NKL allows you to manage Service resources within your cluster to update your load balancers, with tooling you are most likely already using. + + ## Getting Started There are few bits of administrivia to get out of the way before you can start leveraging NKL for your load balancing needs. From fa1b3e3cc103ca22ae40a1f2759d425b62cd458f Mon Sep 17 00:00:00 2001 From: Steve Wagner Date: Tue, 2 May 2023 11:29:55 -0700 Subject: [PATCH 9/9] header alignment --- README.md | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index d01a577a..96febe54 100644 --- a/README.md +++ b/README.md @@ -1,12 +1,11 @@ -
 The NGINX K8s Loadbalancer, or _NKL_, is a Kubernetes controller that provides TCP load balancing external to a Kubernetes cluster running on-premise.
 
 ## Requirements