
Avoid repeated resolving of singleton beans through @Lazy proxy #33841

Description

@dreis2211

Hi 👋 ,

we have discovered that one of our applications spends a considerable amount of CPU cycles (10-20%) in DefaultListableBeanFactory.doResolveDependency at runtime. We tracked this down to the repeated resolving of beans through e.g. ObjectProvider.getObject (@Lazy-annotated fields suffer from the same problem).

[Profiler screenshot: CPU samples concentrated in DefaultListableBeanFactory.doResolveDependency]

I've spent a few minutes creating a reproducer example here: https://github.com/dreis2211/objectprovider-example
But essentially it's this:

import java.util.List;

import org.springframework.beans.factory.ObjectProvider;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TestController {

	private final ObjectProvider<List<BeanInterface>> objectProvider;

	private List<BeanInterface> beansCache;

	public TestController(ObjectProvider<List<BeanInterface>> objectProvider) {
		this.objectProvider = objectProvider;
	}

	// Resolves the beans on every request - each call goes through
	// DefaultListableBeanFactory.doResolveDependency again.
	@GetMapping("/beans")
	public String beans() {
		List<BeanInterface> beans = objectProvider.getIfAvailable();
		return "Hello you beautiful " + beans.size() + " beans";
	}

	// Resolves the beans once and reuses the cached result afterwards.
	@GetMapping("/cached-beans")
	public String cachedBeans() {
		List<BeanInterface> beans = getCachedBeans();
		return "Hello you beautiful " + beans.size() + " beans";
	}

	private List<BeanInterface> getCachedBeans() {
		if (beansCache == null) {
			beansCache = objectProvider.getIfAvailable();
		}
		return beansCache;
	}

}

A quick wrk benchmark against the /beans endpoint shows the following results:

wrk -t12 -c400 -d30s http://localhost:8080/beans

Running 30s test @ http://localhost:8080/beans
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    58.57ms   64.48ms 532.24ms   80.97%
    Req/Sec   372.77    159.75     1.04k    65.66%
  133853 requests in 30.10s, 18.28MB read
  Socket errors: connect 157, read 150, write 7, timeout 0
Requests/sec:   4447.18
Transfer/sec:    621.79KB

Caching those beans in a class-local field (the /cached-beans endpoint in the example above) yields the following results:

wrk -t12 -c400 -d30s http://localhost:8080/cached-beans
Running 30s test @ http://localhost:8080/cached-beans
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.78ms    1.26ms  69.73ms   72.44%
    Req/Sec     4.75k     1.40k    9.92k    64.14%
  1638000 requests in 30.11s, 223.68MB read
  Socket errors: connect 157, read 153, write 0, timeout 0
Requests/sec:  54405.84
Transfer/sec:      7.43MB

As you can see, the latter is considerably better in terms of throughput (roughly 12x the requests per second).
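
For completeness, the @Lazy field variant mentioned above exhibits the same behaviour. A minimal sketch (class and endpoint names are illustrative, not part of the reproducer):

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Lazy;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LazyTestController {

	// The injected field is a lazy-resolution proxy. Every method invocation
	// on it (here beans.size()) goes back to the BeanFactory and through
	// DefaultListableBeanFactory.doResolveDependency again.
	@Autowired
	@Lazy
	private List<BeanInterface> beans;

	@GetMapping("/lazy-beans")
	public String lazyBeans() {
		return "Hello you beautiful " + beans.size() + " beans";
	}

}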

As the application at hand makes use of @Lazy or ObjectProvider in many places, a colleague of mine spent some time writing a CustomAutowireConfigurer that avoids the repeated resolving for us, so we don't have to clutter our code with class-local caches. That, however, feels like overkill to me. Is there any specific reason why the result of deferred bean resolving is not cached by default? I have the feeling the current behaviour is a little counter-intuitive and might not be known to everybody. A quick look into the docs also doesn't mention the runtime performance characteristics of such approaches; a rough sketch of the kind of caching involved is shown below.
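
To illustrate (a minimal sketch only - CachingProvider is a hypothetical name, not our actual implementation; it assumes the wrapped dependency is a singleton, so re-resolving it is idempotent):

import org.springframework.beans.factory.ObjectProvider;

// Hypothetical helper: memoizes the first successful resolution so that
// subsequent calls skip DefaultListableBeanFactory.doResolveDependency.
// Only safe for singleton beans, where resolution always yields the same
// instance.
final class CachingProvider<T> {

	private final ObjectProvider<T> delegate;

	private volatile T cached;

	CachingProvider(ObjectProvider<T> delegate) {
		this.delegate = delegate;
	}

	T getIfAvailable() {
		T value = this.cached;
		if (value == null) {
			value = this.delegate.getIfAvailable();
			this.cached = value; // benign race: concurrent callers resolve the same singleton
		}
		return value;
	}

}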

In case there is nothing to be done about the performance in general, I'd at least vote for an additional hint in the documentation about this.

Thank you :)

Cheers,
Christoph

Labels

in: core (Issues in core modules: aop, beans, core, context, expression)
type: enhancement (A general enhancement)
