
segfault in nginx under high concurrency #21

@buddy-ekb

Description


I suspect there is a bug in ngx_postgres' connection cache code. While stressing nginx with "ab -c 20 ..." it occasionally segfaulted, so I ran it under valgrind. Here is the result:

==25581== Invalid read of size 8
==25581== at 0x406DAB: ngx_destroy_pool (ngx_palloc.c:76)
==25581== by 0x4B6457: ngx_postgres_upstream_free_connection (ngx_postgres_upstream.c:584)
==25581== by 0x4B00FE: ngx_postgres_keepalive_free_peer (ngx_postgres_keepalive.c:218)
==25581== by 0x4B60DC: ngx_postgres_upstream_free_peer (ngx_postgres_upstream.c:509)
==25581== by 0x4B6674: ngx_postgres_upstream_finalize_request (ngx_postgres_util.c:79)
==25581== by 0x4B4DCD: ngx_postgres_upstream_done (ngx_postgres_processor.c:507)
==25581== by 0x4B4D46: ngx_postgres_upstream_get_ack (ngx_postgres_processor.c:488)
==25581== by 0x4B4951: ngx_postgres_upstream_get_result (ngx_postgres_processor.c:366)
==25581== by 0x4B40F0: ngx_postgres_process_events (ngx_postgres_processor.c:76)
==25581== by 0x4AF7D2: ngx_postgres_rev_handler (ngx_postgres_handler.c:314)
==25581== by 0x46415C: ngx_http_upstream_handler (ngx_http_upstream.c:976)
==25581== by 0x437C1D: ngx_epoll_process_events (ngx_epoll_module.c:691)
==25581== by 0x42853F: ngx_process_events_and_timers (ngx_event.c:248)
==25581== by 0x4347E4: ngx_single_process_cycle (ngx_process_cycle.c:315)
==25581== by 0x403DF4: main (nginx.c:404)
==25581== Address 0x5641930 is 96 bytes inside a block of size 256 free'd
==25581== at 0x4A063F0: free (vg_replace_malloc.c:446)
==25581== by 0x406E39: ngx_destroy_pool (ngx_palloc.c:87)
==25581== by 0x44E8D2: ngx_http_close_connection (ngx_http_request.c:3489)
==25581== by 0x44E56D: ngx_http_close_request (ngx_http_request.c:3350)
==25581== by 0x44E0C1: ngx_http_lingering_close_handler (ngx_http_request.c:3209)
==25581== by 0x44DF6B: ngx_http_set_lingering_close (ngx_http_request.c:3171)
==25581== by 0x44C858: ngx_http_finalize_connection (ngx_http_request.c:2493)
==25581== by 0x44C443: ngx_http_finalize_request (ngx_http_request.c:2384)
==25581== by 0x4B6800: ngx_postgres_upstream_finalize_request (ngx_postgres_util.c:140)
==25581== by 0x4B4DCD: ngx_postgres_upstream_done (ngx_postgres_processor.c:507)
==25581== by 0x4B4D46: ngx_postgres_upstream_get_ack (ngx_postgres_processor.c:488)
==25581== by 0x4B4951: ngx_postgres_upstream_get_result (ngx_postgres_processor.c:366)
==25581== by 0x4B40F0: ngx_postgres_process_events (ngx_postgres_processor.c:76)
==25581== by 0x4AF7D2: ngx_postgres_rev_handler (ngx_postgres_handler.c:314)
==25581== by 0x46415C: ngx_http_upstream_handler (ngx_http_upstream.c:976)
==25581== by 0x437C1D: ngx_epoll_process_events (ngx_epoll_module.c:691)
==25581== by 0x42853F: ngx_process_events_and_timers (ngx_event.c:248)
==25581== by 0x4347E4: ngx_single_process_cycle (ngx_process_cycle.c:315)
==25581== by 0x403DF4: main (nginx.c:404)

I believe that ngx_postgres_keepalive_free_peer (ngx_postgres_keepalive.c:164) somehow fails to properly isolate one of nginx's memory pools from the freeing that happens later at ngx_postgres_util.c:140, which eventually leads to a read from a freed and reused memory area.
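To make the suspicion concrete, here is a small self-contained C model of the lifetime pattern that would produce exactly this kind of valgrind report. It is not the module's actual code: pool_t, cached_peer_t, create_pool and destroy_pool are made-up stand-ins for ngx_pool_t, the keepalive cache entry, ngx_create_pool and ngx_destroy_pool, and the program deliberately commits the use-after-free it is meant to illustrate.

```c
/* Simplified model (NOT the module's code) of the suspected bug:
 * a cached peer keeps a pointer to a pool whose memory is owned by the
 * request/connection, the connection is closed and the pool is freed,
 * and a later destroy through the cached pointer reads already-freed
 * memory -- the pattern valgrind flags above.
 */
#include <stdio.h>
#include <stdlib.h>

/* stand-in for ngx_pool_t */
typedef struct pool_s {
    struct pool_s *next;   /* a real ngx_destroy_pool walks a chain like this */
    size_t         size;
} pool_t;

/* stand-in for the keepalive cache entry that outlives the request */
typedef struct {
    pool_t *pgconn_pool;   /* pointer saved by the connection cache */
} cached_peer_t;

static pool_t *create_pool(size_t size) {
    pool_t *p = malloc(sizeof(pool_t));
    p->next = NULL;
    p->size = size;
    return p;
}

static void destroy_pool(pool_t *p) {
    /* reading p->next after p was freed is the "Invalid read of size 8" */
    for (pool_t *cur = p, *next; cur != NULL; cur = next) {
        next = cur->next;
        free(cur);
    }
}

int main(void) {
    cached_peer_t cached;

    pool_t *request_pool = create_pool(256);
    cached.pgconn_pool = request_pool;   /* keepalive_free_peer keeps the pointer */

    destroy_pool(request_pool);          /* ngx_http_close_connection frees it    */

    /* a later request reuses the cached peer and destroys "its" pool again:
     * invalid read plus double free, just like the stack traces above        */
    destroy_pool(cached.pgconn_pool);

    printf("done\n");
    return 0;
}
```

Running this under valgrind reports an invalid read inside destroy_pool followed by an invalid free, which is the same shape as the report above, so my guess is that the keepalive path keeps a pool pointer alive past the point where the owning connection's memory is released.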

I'd be glad to provide any further information to anyone looking into this.
