
Add HTTPBenchmarkApp with consolidated "real‑world" benchmarks #101


Open · wants to merge 2 commits into main

Conversation

@RaghavRoy145 commented Apr 19, 2025

Motivations

  • Added a self‑contained SwiftNIO example that demonstrates realistic throughput and latency scenarios: file streaming, high‑concurrency sums, partial I/O patterns, and lock contention.
  • A command‑line tool that can run these benchmarks in‑process and print percentile statistics helps both users and contributors understand and tune performance. This also addresses the performance ideas/roadmap discussion in swift-nio#2844 (comment).

Modifications

  • HTTPBenchmarkApp implementing four HTTP endpoints and four in‑process benchmarks.
  • CLI flags:
    • --run-all-benchmarks to run all scenarios back‑to‑back.
    • --samples <N> to customize iteration count (default 10).
    • --use-io-uring to switch to NIOTSEventLoopGroup when available.
  • Benchmark helpers (a rough sketch follows after this list):
    • measure(_:) and measureMultiple(iterations:block:) for timing.
    • calculateStatistics(from:) to compute p0/p25/p50/p75/p90/p99/p100.
    • formatBenchmarkTable(metric:stats:) to render Unicode tables.
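
For reference, here is a rough sketch of what those helpers could look like; the function names mirror the list above, but the signatures, millisecond units, and nearest‑rank percentile calculation are illustrative assumptions rather than the exact code in this PR.

```swift
import Dispatch

// Times a single block and returns elapsed wall-clock time in milliseconds.
func measure(_ block: () throws -> Void) rethrows -> Double {
    let start = DispatchTime.now().uptimeNanoseconds
    try block()
    return Double(DispatchTime.now().uptimeNanoseconds - start) / 1_000_000
}

// Runs the block `iterations` times and collects one sample per run.
func measureMultiple(iterations: Int, block: () throws -> Void) rethrows -> [Double] {
    var samples: [Double] = []
    samples.reserveCapacity(iterations)
    for _ in 0..<iterations {
        samples.append(try measure(block))
    }
    return samples
}

// Nearest-rank percentiles (p0/p25/p50/p75/p90/p99/p100) over the sorted samples.
func calculateStatistics(from samples: [Double]) -> [(percentile: Int, value: Double)] {
    precondition(!samples.isEmpty, "need at least one sample")
    let sorted = samples.sorted()
    return [0, 25, 50, 75, 90, 99, 100].map { p in
        let rank = Int((Double(p) / 100.0 * Double(sorted.count - 1)).rounded())
        return (p, sorted[rank])
    }
}
```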

Results

Users can now build and run a single tool to:
1. Execute workload benchmarks entirely in‑process, with configurable sample counts and percentile output.
2. Optionally serve HTTP endpoints for external latency measurements.

This provides an extensible SwiftNIO example app that is both educational and practically useful for performance tuning.

@Option(help: "Host to bind on") var host: String = "127.0.0.1"
@Option(help: "Port to bind on") var port: Int = 8080
@Option(help: "Number of samples for each consolidated benchmark") var samples: Int = 10
@Flag(help: "Enable io_uring backend (requires NIOTransportServices)") var useIOUring: Bool = false
Contributor

This looks wrong: NIOTransportServices doesn't use io_uring, it uses Network.framework.
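
If the goal is simply to let users pick an alternative backend where one exists, a platform‑gated helper along these lines might be less confusing than an io_uring flag. This is only a sketch: the makeEventLoopGroup name and the preferNetworkFramework option are assumptions, not code from this PR.

```swift
import Foundation
import NIOCore
import NIOPosix
#if canImport(NIOTransportServices)
import NIOTransportServices  // Network.framework-backed event loops, Apple platforms only
#endif

// Returns a NIOTSEventLoopGroup (Network.framework) on Apple platforms when
// requested, otherwise a MultiThreadedEventLoopGroup (epoll/kqueue).
// Neither path enables io_uring.
func makeEventLoopGroup(preferNetworkFramework: Bool) -> EventLoopGroup {
    let threads = ProcessInfo.processInfo.activeProcessorCount
    #if canImport(NIOTransportServices)
    if preferNetworkFramework {
        return NIOTSEventLoopGroup(loopCount: threads)
    }
    #endif
    return MultiThreadedEventLoopGroup(numberOfThreads: threads)
}
```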

// will execute consolidated benchmarks and output a report.
// Optionally, the --use-io-uring flag enables NIOTSEventLoopGroup (Linux io_uring).
//
//===----------------------------------------------------------------------===//
Contributor

This license header isn't valid.

let server = try bootstrap.bind(host: host, port: port).wait()
print("HTTPBenchmarkApp running on \(host):\(port)")
try server.closeFuture.wait()
try group.syncShutdownGracefully()
Contributor

This seems a bit odd: we either run benchmarks or a HTTP server. Why that choice?

}
dg.wait()
}
print(formatBenchmarkTable(metric: "Lock Contention (ms)", stats: stats4))
Contributor

These benchmarks don't seem to be particularly HTTP related.

}
}

extension BenchmarkRequestHandler: @unchecked Sendable {}
Contributor

This type definitely isn't Sendable.
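
One common way to avoid the @unchecked Sendable conformance is to keep the handler's mutable state confined to its channel's EventLoop and have off‑loop callers hop onto that loop. A minimal sketch under that assumption (this is not the PR's handler; the type and members are illustrative):

```swift
import NIOCore

// Illustrative handler: `bytesSeen` is only mutated from pipeline callbacks,
// which NIO runs on the channel's EventLoop, so the type needs no Sendable
// conformance at all.
final class CountingHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer

    private var bytesSeen = 0

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        bytesSeen += self.unwrapInboundIn(data).readableBytes
        context.fireChannelRead(data)
    }

    // Off-loop readers go through the loop instead of touching state directly.
    // (Under strict concurrency checking this capture of `self` would need
    // further care, e.g. NIOLoopBound; kept simple here for illustration.)
    func totalBytesSeen(on loop: EventLoop) -> EventLoopFuture<Int> {
        loop.submit { self.bytesSeen }
    }
}
```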

@RaghavRoy145 (Author)

Thank you for your reviews @Lukasa! Just so it doesn't seem like I've abandoned this project, I wanted to put this here: I'll continue working on this right after my exams.
