Re-enable http keepalive on remote storage #3160
Conversation
Can't make out what went wrong in the Travis failure.
mdlayher left a comment:
LGTM. Failure appears unrelated to me. Possibly a flaky test.
Indeed, it succeeded the second time round.
ProxyURL URL `yaml:"proxy_url,omitempty"`
// TLSConfig to use to connect to the targets.
TLSConfig TLSConfig `yaml:"tls_config,omitempty"`
// If set, override whether to use HTTP KeepAlive - scraping defaults OFF, remote read/write defaults ON
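The override in the diff above could be wired into Go's http.Transport roughly as follows. This is an illustrative sketch, not Prometheus's actual code: the HTTPClientConfig shape shown here, the *bool KeepAlive field (nil meaning "not set in the YAML"), and the newTransport helper are all assumptions for the example.

```go
package main

import (
	"fmt"
	"net/http"
)

// HTTPClientConfig sketches the struct in the diff above; the KeepAlive
// field name and its *bool type are assumptions for illustration (nil
// means the user did not set keep_alive in the YAML).
type HTTPClientConfig struct {
	KeepAlive *bool `yaml:"keep_alive,omitempty"`
}

// newTransport applies the user's override if set, otherwise falls back
// to the per-caller default (OFF for scraping, ON for remote read/write).
func newTransport(cfg HTTPClientConfig, defaultKeepAlive bool) *http.Transport {
	keepAlive := defaultKeepAlive
	if cfg.KeepAlive != nil {
		keepAlive = *cfg.KeepAlive
	}
	return &http.Transport{DisableKeepAlives: !keepAlive}
}

func main() {
	off := false
	// Remote write would default keep-alive ON, but the user turned it off.
	t := newTransport(HTTPClientConfig{KeepAlive: &off}, true)
	fmt.Println(t.DisableKeepAlives)
}
```

The *bool is what makes "unset" distinguishable from "explicitly false", which is exactly what lets the same stanza carry different defaults per caller, and is the property the review below objects to.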
I think this is going to be very hard to understand for users and to explain to them, a given config stanza shouldn't mean different things in different contexts. It's also unresolved what scraping should do in relation to keepalive.
If you want to enable keepalive for remote storage you should do exactly that, without bringing additional complexity and other debates into it.
I'm not following why you think this config means different things in different contexts. false always means you want keep-alive disabled, and true always means you want it available. Unless I made a typo?
No config option should mean different things in different contexts. We don't do this anywhere else in Prometheus or the other components.
I made a different take on it at #3173, with no configurability.
Let me know if you revert this PR.
@brian-brazil @bboreham I do not see keep_alive in https://prometheus.io/docs/prometheus/latest/configuration/configuration/
How can I set it to false?
You can’t.
If you need that feature, I suggest modifying the code, or opening an issue explaining why you need it.
We have manual exporters built on https://restrserve.org/, which runs in forking mode (yes, I know forking has its problems, but in the R world it's common).
As far as I can see, Prometheus does not close the connection after fetching metrics, so the child process lives forever. At some point the OOM killer comes and kills the entire k8s pod.
@bboreham
Do you have any idea why Prometheus does not close the connection by default?
UPD: As far as I can see, Prometheus does not send a keep-alive header either:
GET /metrics HTTP/1.1
Host: 10.36.1.63:8000
User-Agent: Prometheus/2.28.1
Accept: application/openmetrics-text; version=0.0.1,text/plain;version=0.0.4;q=0.5,*/*;q=0.1
Accept-Encoding: gzip
X-Prometheus-Scrape-Timeout-Seconds: 5
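One note on the capture above: in HTTP/1.1 connections are persistent by default, so the absence of a Keep-Alive header does not mean the connection will be closed; a peer signals closing with a Connection: close header instead. A minimal Go sketch (against a throwaway local test server, not Prometheus's scrape code) showing that disabling keep-alive on the client side is what makes it send Connection: close:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// clientRequestsClose performs one GET against a throwaway test server
// and reports whether the server saw "Connection: close" on the request.
func clientRequestsClose(disableKeepAlives bool) bool {
	sawClose := make(chan bool, 1)
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		sawClose <- r.Close // true iff the request carried "Connection: close"
		fmt.Fprintln(w, "metric_total 1")
	}))
	defer srv.Close()

	client := &http.Client{Transport: &http.Transport{DisableKeepAlives: disableKeepAlives}}
	resp, err := client.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	return <-sawClose
}

func main() {
	fmt.Println(clientRequestsClose(true))  // client asks the server to close the connection
	fmt.Println(clientRequestsClose(false)) // persistent connection, no close header sent
}
```

So a forking exporter that needs connections closed after each scrape would want the scraping client to disable keep-alive, which is the configurability this PR thread is about.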
Removes configurability introduced in prometheus#3160 in favour of hard-coding, per advice from @brian-brazil.
As discussed at https://groups.google.com/d/msg/prometheus-users/CMyMtPC-Co4/vDfrsOXVAAAJ
Without this change, Prometheus is doing a DNS lookup, opening/closing a socket, etc., on every remote read or write, which may be many times per second.
I felt I should give users the option, and since I did this at the lower level, it ends up being relevant to #2498.
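The connection churn described above can be observed with net/http/httptrace. This sketch (against a hypothetical local test server, not Prometheus's remote-write client) shows that with keep-alive the second request reuses the first TCP connection, while with DisableKeepAlives every request dials afresh:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"net/http/httptest"
	"net/http/httptrace"
)

// secondRequestReused makes two sequential GETs with one client and
// reports whether the second one reused the first TCP connection.
func secondRequestReused(disableKeepAlives bool) bool {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	defer srv.Close()

	client := &http.Client{Transport: &http.Transport{DisableKeepAlives: disableKeepAlives}}
	var reused bool
	for i := 0; i < 2; i++ {
		// GotConn fires once per request and says whether the
		// connection came from the idle pool.
		trace := &httptrace.ClientTrace{
			GotConn: func(info httptrace.GotConnInfo) { reused = info.Reused },
		}
		req, err := http.NewRequest("GET", srv.URL, nil)
		if err != nil {
			panic(err)
		}
		req = req.WithContext(httptrace.WithClientTrace(context.Background(), trace))
		resp, err := client.Do(req)
		if err != nil {
			panic(err)
		}
		resp.Body.Close()
	}
	return reused
}

func main() {
	fmt.Println(secondRequestReused(false)) // keep-alive: second request reuses the connection
	fmt.Println(secondRequestReused(true))  // no keep-alive: fresh dial (and DNS lookup) each time
}
```

With real remote endpoints, each fresh dial also repeats the DNS lookup and TCP (and possibly TLS) handshake, which is the per-request cost the PR description refers to.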