diff --git a/pipeline/filters/kubernetes.md b/pipeline/filters/kubernetes.md
index d4ad92b1b..492c86862 100644
--- a/pipeline/filters/kubernetes.md
+++ b/pipeline/filters/kubernetes.md
@@ -271,7 +271,7 @@ There are some configuration setup needed for this feature.
 
 Role Configuration for Fluent Bit DaemonSet Example:
 
-```text
+```yaml
 ---
 apiVersion: v1
 kind: ServiceAccount
@@ -314,34 +314,34 @@ The difference is that kubelet need a special permission for resource `nodes/pro
 Fluent Bit Configuration Example:
 
 ```text
-    [INPUT]
-        Name             tail
-        Tag              kube.*
-        Path             /var/log/containers/*.log
-        DB               /var/log/flb_kube.db
-        Parser           docker
-        Docker_Mode      On
-        Mem_Buf_Limit    50MB
-        Skip_Long_Lines  On
-        Refresh_Interval 10
-
-    [FILTER]
-        Name             kubernetes
-        Match            kube.*
-        Kube_URL         https://kubernetes.default.svc.cluster.local:443
-        Kube_CA_File     /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
-        Kube_Token_File  /var/run/secrets/kubernetes.io/serviceaccount/token
-        Merge_Log        On
-        Buffer_Size      0
-        Use_Kubelet      true
-        Kubelet_Port     10250
+[INPUT]
+    Name             tail
+    Tag              kube.*
+    Path             /var/log/containers/*.log
+    DB               /var/log/flb_kube.db
+    Parser           docker
+    Docker_Mode      On
+    Mem_Buf_Limit    50MB
+    Skip_Long_Lines  On
+    Refresh_Interval 10
+
+[FILTER]
+    Name             kubernetes
+    Match            kube.*
+    Kube_URL         https://kubernetes.default.svc.cluster.local:443
+    Kube_CA_File     /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+    Kube_Token_File  /var/run/secrets/kubernetes.io/serviceaccount/token
+    Merge_Log        On
+    Buffer_Size      0
+    Use_Kubelet      true
+    Kubelet_Port     10250
 ```
 
-So for fluent bit configuration, you need to set the `Use_Kubelet` to true to enable this feature.
+To enable this feature, set `Use_Kubelet` to `true` in the Fluent Bit configuration.
 
 DaemonSet config Example:
 
-```text
+```yaml
 ---
 apiVersion: apps/v1
 kind: DaemonSet
diff --git a/pipeline/filters/lua.md b/pipeline/filters/lua.md
index 95890991f..4f1bab092 100644
--- a/pipeline/filters/lua.md
+++ b/pipeline/filters/lua.md
@@ -196,12 +196,12 @@ We want to extract the `sandboxbsh` name and add it to our record as a special k
 
 {% tabs %}
 {% tab title="fluent-bit.conf" %}
 ```
-    [FILTER]
-        Name    lua
-        Alias   filter-iots-lua
-        Match   iots_thread.*
-        Script  filters.lua
-        Call    set_landscape_deployment
+[FILTER]
+    Name    lua
+    Alias   filter-iots-lua
+    Match   iots_thread.*
+    Script  filters.lua
+    Call    set_landscape_deployment
 ```
 {% endtab %}
@@ -358,23 +358,23 @@ Configuration to get istio logs and apply response code filter to them.
 
 {% tabs %}
 {% tab title="fluent-bit.conf" %}
 ```ini
-    [INPUT]
-        Name              tail
-        Path              /var/log/containers/*_istio-proxy-*.log
-        multiline.parser  docker, cri
-        Tag               istio.*
-        Mem_Buf_Limit     64MB
-        Skip_Long_Lines   Off
-
-    [FILTER]
-        Name    lua
-        Match   istio.*
-        Script  response_code_filter.lua
-        call    cb_response_code_filter
-
-    [Output]
-        Name    stdout
-        Match   *
+[INPUT]
+    Name              tail
+    Path              /var/log/containers/*_istio-proxy-*.log
+    multiline.parser  docker, cri
+    Tag               istio.*
+    Mem_Buf_Limit     64MB
+    Skip_Long_Lines   Off
+
+[FILTER]
+    Name    lua
+    Match   istio.*
+    Script  response_code_filter.lua
+    call    cb_response_code_filter
+
+[Output]
+    Name    stdout
+    Match   *
 ```
 {% endtab %}
diff --git a/pipeline/inputs/opentelemetry.md b/pipeline/inputs/opentelemetry.md
index fcbb64a69..1b9f44fae 100644
--- a/pipeline/inputs/opentelemetry.md
+++ b/pipeline/inputs/opentelemetry.md
@@ -79,13 +79,13 @@ pipeline:
 
 {% tab title="fluent-bit.conf" %}
 ```
 [INPUT]
-  name opentelemetry
-  listen 127.0.0.1
-  port 4318
+    name opentelemetry
+    listen 127.0.0.1
+    port 4318
 
 [OUTPUT]
-  name stdout
-  match *
+    name stdout
+    match *
 ```
 {% endtab %}
diff --git a/pipeline/inputs/prometheus-remote-write.md b/pipeline/inputs/prometheus-remote-write.md
index f23ecc40b..6d9a9d7cb 100644
--- a/pipeline/inputs/prometheus-remote-write.md
+++ b/pipeline/inputs/prometheus-remote-write.md
@@ -26,14 +26,14 @@ A sample config file to get started will look something like the following:
 
 {% tab title="fluent-bit.conf" %}
 ```
 [INPUT]
-  name prometheus_remote_write
-  listen 127.0.0.1
-  port 8080
-  uri /api/prom/push
+    name prometheus_remote_write
+    listen 127.0.0.1
+    port 8080
+    uri /api/prom/push
 
 [OUTPUT]
-  name stdout
-  match *
+    name stdout
+    match *
 ```
 {% endtab %}
@@ -65,13 +65,13 @@ Communicating with TLS, you will need to use the tls related parameters:
 
 ```
 [INPUT]
-  Name prometheus_remote_write
-  Listen 127.0.0.1
-  Port 8080
-  Uri /api/prom/push
-  Tls On
-  tls.crt_file /path/to/certificate.crt
-  tls.key_file /path/to/certificate.key
+    Name          prometheus_remote_write
+    Listen        127.0.0.1
+    Port          8080
+    Uri           /api/prom/push
+    Tls           On
+    tls.crt_file  /path/to/certificate.crt
+    tls.key_file  /path/to/certificate.key
 ```
 
 Now, you should be able to send data over TLS to the remote write input.
diff --git a/pipeline/outputs/chronicle.md b/pipeline/outputs/chronicle.md
index d2935fc00..ddc945c88 100644
--- a/pipeline/outputs/chronicle.md
+++ b/pipeline/outputs/chronicle.md
@@ -1,5 +1,3 @@
----
-
 # Chronicle
 
-The Chronicle output plugin allows ingesting security logs into [Google Chronicle](https://chronicle.security/) service. This connector is designed to send unstructured security logs.
+The Chronicle output plugin allows ingesting security logs into the [Google Chronicle](https://chronicle.security/) service. This connector is designed to send unstructured security logs.
diff --git a/pipeline/outputs/oci-logging-analytics.md b/pipeline/outputs/oci-logging-analytics.md
index 54abb039a..36475c870 100644
--- a/pipeline/outputs/oci-logging-analytics.md
+++ b/pipeline/outputs/oci-logging-analytics.md
@@ -86,11 +86,13 @@ In case of multiple inputs, where oci_la_* properties can differ, you can add th
 [INPUT]
     Name dummy
     Tag dummy
+
 [Filter]
     Name modify
     Match *
     Add oci_la_log_source_name
     Add oci_la_log_group_id
+
 [Output]
     Name oracle_log_analytics
     Match *
@@ -109,6 +111,7 @@ You can attach certain metadata to the log events collected from various inputs.
 [INPUT]
     Name dummy
     Tag dummy
+
 [Output]
     Name oracle_log_analytics
     Match *
@@ -138,12 +141,12 @@ The above configuration will generate a payload that looks like this
       "metadata": {
         "key1": "value1",
         "key2": "value2"
-    },
-    "logSourceName": "example_log_source",
-    "logRecords": [
-    "dummy"
-    ]
-    }
+      },
+      "logSourceName": "example_log_source",
+      "logRecords": [
+        "dummy"
+      ]
+    }
   ]
 }
 ```
@@ -156,11 +159,13 @@ With oci_config_in_record option set to true, the metadata key-value pairs will
 [INPUT]
     Name dummy
     Tag dummy
+
 [FILTER]
     Name Modify
     Match *
     Add olgm.key1 val1
     Add olgm.key2 val2
+
 [FILTER]
     Name nest
     Match *
@@ -168,11 +173,13 @@ With oci_config_in_record option set to true, the metadata key-value pairs will
     Wildcard olgm.*
     Nest_under oci_la_global_metadata
     Remove_prefix olgm.
+
 [Filter]
     Name modify
     Match *
     Add oci_la_log_source_name
     Add oci_la_log_group_id
+
 [Output]
     Name oracle_log_analytics
     Match *
diff --git a/pipeline/outputs/s3.md b/pipeline/outputs/s3.md
index 469123d87..5f2df5f38 100644
--- a/pipeline/outputs/s3.md
+++ b/pipeline/outputs/s3.md
@@ -198,13 +198,13 @@ The following settings are recommended for this use case:
 
 ```
 [OUTPUT]
-     Name s3
-     Match *
-     bucket your-bucket
-     region us-east-1
-     total_file_size 1M
-     upload_timeout 1m
-     use_put_object On
+    Name s3
+    Match *
+    bucket your-bucket
+    region us-east-1
+    total_file_size 1M
+    upload_timeout 1m
+    use_put_object On
 ```
 
 ## S3 Multipart Uploads
@@ -252,14 +252,14 @@ Example:
 
 ```
 [OUTPUT]
-     Name s3
-     Match *
-     bucket your-bucket
-     region us-east-1
-     total_file_size 1M
-     upload_timeout 1m
-     use_put_object On
-     workers 1
+    Name s3
+    Match *
+    bucket your-bucket
+    region us-east-1
+    total_file_size 1M
+    upload_timeout 1m
+    use_put_object On
+    workers 1
 ```
 
 If you enable a single worker, you are enabling a dedicated thread for your S3 output. We recommend starting without workers, evaluating the performance, and then enabling a worker if needed. For most users, the plugin can provide sufficient throughput without workers.
@@ -274,10 +274,10 @@ Example:
 
 ```
 [OUTPUT]
-     Name s3
-     Match *
-     bucket your-bucket
-     endpoint http://localhost:9000
+    Name s3
+    Match *
+    bucket your-bucket
+    endpoint http://localhost:9000
 ```
 
-Then, the records will be stored into the MinIO server.
+The records will then be stored in the MinIO server.
@@ -300,27 +300,27 @@ In your main configuration file append the following _Output_ section:
 
 ```
 [OUTPUT]
-     Name s3
-     Match *
-     bucket your-bucket
-     region us-east-1
-     store_dir /home/ec2-user/buffer
-     total_file_size 50M
-     upload_timeout 10m
+    Name s3
+    Match *
+    bucket your-bucket
+    region us-east-1
+    store_dir /home/ec2-user/buffer
+    total_file_size 50M
+    upload_timeout 10m
 ```
 
-An example that using PutObject instead of multipart:
+An example that uses PutObject instead of multipart upload:
 
 ```
 [OUTPUT]
-     Name s3
-     Match *
-     bucket your-bucket
-     region us-east-1
-     store_dir /home/ec2-user/buffer
-     use_put_object On
-     total_file_size 10M
-     upload_timeout 10m
+    Name s3
+    Match *
+    bucket your-bucket
+    region us-east-1
+    store_dir /home/ec2-user/buffer
+    use_put_object On
+    total_file_size 10M
+    upload_timeout 10m
 ```
 
 ## AWS for Fluent Bit
@@ -387,15 +387,15 @@ Once compiled, Fluent Bit can upload incoming data to S3 in Apache Arrow format.
 
 ```
 [INPUT]
-     Name cpu
+    Name cpu
 
 [OUTPUT]
-     Name s3
-     Bucket your-bucket-name
-     total_file_size 1M
-     use_put_object On
-     upload_timeout 60s
-     Compression arrow
+    Name s3
+    Bucket your-bucket-name
+    total_file_size 1M
+    use_put_object On
+    upload_timeout 60s
+    Compression arrow
 ```
 
-As shown in this example, setting `Compression` to `arrow` makes Fluent Bit to convert payload into Apache Arrow format.
+As shown in this example, setting `Compression` to `arrow` makes Fluent Bit convert the payload into Apache Arrow format.
diff --git a/pipeline/outputs/vivo-exporter.md b/pipeline/outputs/vivo-exporter.md
index 69c00dfcb..ba1afd7bb 100644
--- a/pipeline/outputs/vivo-exporter.md
+++ b/pipeline/outputs/vivo-exporter.md
@@ -25,7 +25,7 @@ Here is a simple configuration of Vivo Exporter, note that this example is not b
     match                   *
     empty_stream_on_read    off
     stream_queue_size       20M
-    http_cors_allow_origin *
+    http_cors_allow_origin  *
 ```
 
 ### How it works
diff --git a/pipeline/outputs/websocket.md b/pipeline/outputs/websocket.md
index fc5d4ab08..bb96cd674 100644
--- a/pipeline/outputs/websocket.md
+++ b/pipeline/outputs/websocket.md
@@ -63,6 +63,7 @@ Websocket plugin is working with tcp keepalive mode, please refer to [networking
     Listen 0.0.0.0
     Port 5170
     Format json
+
 [OUTPUT]
     Name websocket
     Match *