Commit 141ee87 — docs: add php agent queue info (#5007)
Parent: e339250

1 file changed: 22 additions, 20 deletions

File tree

docs/common-problems.asciidoc
@@ -52,16 +52,14 @@ As a result, Elasticsearch must be configured to allow {ref}/docs-index_.html#in
 [float]
 === HTTP 400: Data decoding error / Data validation error
 
-The most likely cause for this is that you are using incompatible versions of agent and APM Server.
-For instance, APM Server 6.2 and 6.5 changed the Intake API spec and require a minimum version of each agent.
-
-View the {apm-overview-ref-v}/agent-server-compatibility.html[agent/server compatibility matrix] for more information.
+The most likely cause for this error is using incompatible versions of APM agent and APM Server.
+See the {apm-overview-ref-v}/agent-server-compatibility.html[agent/server compatibility matrix] for more information.
 
 [[event-too-large]]
 [float]
 === HTTP 400: Event too large
 
-APM Agents communicate with the APM server by sending events in an HTTP request. Each event is sent as its own line in the HTTP request body. If events are too large, you should consider increasing the <<max_event_size,`max_event_size`>>
+APM agents communicate with the APM server by sending events in an HTTP request. Each event is sent as its own line in the HTTP request body. If events are too large, you should consider increasing the <<max_event_size,`max_event_size`>>
 setting in the APM Server, and adjusting relevant settings in the agent.
 
 [[unauthorized]]
@@ -90,8 +88,8 @@ APM Server has an internal queue that helps to:
 When the queue has reached the maximum size,
 APM Server returns an HTTP 503 status with the message "Queue is full".
 
-A full queue generally means that the agents collect more data than APM server is able to process.
-This might happen when APM Server is not configured properly for the size of your Elasticsearch cluster,
+A full queue generally means that the agents collect more data than APM server can process.
+This might happen when APM Server is not configured properly for your Elasticsearch cluster size,
 or because your Elasticsearch cluster is underpowered or not configured properly for the given workload.
 
 The queue can also fill up if Elasticsearch runs out of disk space.
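The sizing advice in this hunk can be made concrete with a sketch of the relevant `apm-server.yml` settings. This is an assumption-laden illustration, not a recommendation: the option names follow the libbeat-style queue and Elasticsearch output settings APM Server uses, and every value is a placeholder to tune for your own workload.

```yaml
# Illustrative values only -- tune for your workload and cluster size.
queue.mem.events: 4096        # max events held in the internal queue
output.elasticsearch:
  worker: 2                   # concurrent bulk requests to Elasticsearch
  bulk_max_size: 5120         # events per bulk request
```

Raising the queue size only buys buffering time; if Elasticsearch cannot keep up, the queue eventually fills again.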
@@ -125,7 +123,7 @@ To alleviate this problem, you can try to:
 
 The target host might be unreachable or the certificate may not be valid. To resolve your issue:
 
-* Make sure that server process on the target host is running and you can connect to it.
+* Make sure that the APM Server process on the target host is running and you can connect to it.
 First, try to ping the target host to verify that you can reach it from the host running {beatname_uc}.
 Then use either `nc` or `telnet` to make sure that the port is available. For example:
 +
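The `ping` and `nc`/`telnet` checks this hunk refers to could look like the following; the hostname is a placeholder for your APM Server address, and 8200 is the default APM Server port.

```
# Replace apm-server.example.com with your APM Server host.
ping -c 3 apm-server.example.com

# Check that port 8200 is open (-z: scan only, -v: verbose):
nc -vz apm-server.example.com 8200

# telnet is an alternative if nc is unavailable:
telnet apm-server.example.com 8200
```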
@@ -159,8 +157,8 @@ This happens because your certificate is only valid for the hostname present in
 
 To resolve this problem, try one of these solutions:
 
-* Create a DNS entry for the hostname mapping it to the server's IP.
-* Create an entry in `/etc/hosts` for the hostname. Or on Windows add an entry to
+* Create a DNS entry for the hostname, mapping it to the server's IP.
+* Create an entry in `/etc/hosts` for the hostname. Or, on Windows, add an entry to
 `C:\Windows\System32\drivers\etc\hosts`.
 * Re-create the server certificate and add a SubjectAltName (SAN) for the IP address of the server. This makes the
 server's certificate valid for both the hostname and the IP address.
@@ -213,6 +211,7 @@ you won't see a sign of failures as the APM server asynchronously sends the data
 However,
 the APM server and Elasticsearch log a warning like this:
 
+[source,logs]
 ----
 {\"type\":\"illegal_argument_exception\",\"reason\":\"Limit of total fields [1000] in index [apm-7.0.0-transaction-2017.05.30] has been exceeded\"}
 ----
@@ -226,21 +225,23 @@ especially when using a load balancer.
 
 You may see an error like the one below in the agent logs, and/or a similar error on the APM Server side:
 
+[source,logs]
 ----------------------------------------------------------------------
 [ElasticAPM] APM Server responded with an error:
 "read tcp 123.34.22.313:8200->123.34.22.40:41602: i/o timeout"
 ----------------------------------------------------------------------
 
-To fix this, ensure timeouts are incrementing from the {apm-agents-ref}[APM Agent],
+To fix this, ensure timeouts are incrementing from the {apm-agents-ref}[APM agent],
 through your load balancer, to the <<read_timeout,APM Server>>.
 
 By default, the agent timeouts are set at 10 seconds, and the server timeout is set at 30 seconds.
 Your load balancer should be set somewhere between these numbers.
 
 For example:
 
+[source,txt]
 ----------------------------------------------------------------------
-APM Agent --> Load Balancer --> APM Server
+APM agent --> Load Balancer --> APM Server
     10s             15s            30s
 ----------------------------------------------------------------------
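The 10s/15s/30s chain shown in this hunk could be realized, for example, in an nginx load balancer; this is a minimal sketch assuming nginx is proxying to APM Server, with placeholder upstream address and illustrative values.

```
# Minimal nginx sketch: keep the LB timeout between the agent's 10s
# and APM Server's 30s. Upstream address is a placeholder.
location / {
    proxy_pass http://apm-server:8200;
    proxy_read_timeout 15s;   # waiting for a response from APM Server
    proxy_send_timeout 15s;   # sending the request to APM Server
}
```

Other load balancers (HAProxy, ELB, etc.) have equivalent per-connection timeout settings; the same "between 10s and 30s" rule applies.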

@@ -260,19 +261,20 @@ and data will be lost.
 
 Some agents have internal queues or buffers that will temporarily store data if the APM Server goes down.
 As a general rule of thumb, queues fill up quickly. Assume data will be lost if APM Server goes down.
-Adjusting these queues/buffers can increase the overhead of the agent, so use caution when updating default values.
+Adjusting these queues/buffers can increase the agent's overhead, so use caution when updating default values.
 
-* **Go Agent** - Circular buffer with configurable size:
+* **Go agent** - Circular buffer with configurable size:
 {apm-go-ref}/configuration.html#config-api-buffer-size[`ELASTIC_APM_BUFFER_SIZE`].
-* **Java Agent** - Internal buffer with configurable size:
+* **Java agent** - Internal buffer with configurable size:
 {apm-java-ref}/config-reporter.html#config-max-queue-size[`max_queue_size`].
-* **Node.js Agent** - No internal queue. Data is lost.
-* **Python Agent** - Internal {apm-py-ref}/tuning-and-overhead.html#tuning-queue[Transaction queue]
+* **Node.js agent** - No internal queue. Data is lost.
+* **PHP agent** - No internal queue. Data is lost.
+* **Python agent** - Internal {apm-py-ref}/tuning-and-overhead.html#tuning-queue[Transaction queue]
 with configurable size and time between flushes.
-* **Ruby Agent** - Internal queue with configurable size:
+* **Ruby agent** - Internal queue with configurable size:
 {apm-ruby-ref}/configuration.html#config-api-buffer-size[`api_buffer_size`].
-* **RUM Agent** - No internal queue. Data is lost.
-* **.NET Agent** - No internal queue. Data is lost.
+* **RUM agent** - No internal queue. Data is lost.
+* **.NET agent** - No internal queue. Data is lost.
 
 [[server-resource-exists-not-alias]]
 [float]
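As a worked example of adjusting one of the buffers in the list above, the Java agent's `max_queue_size` can be supplied as an `elastic.apm.*` system property at startup; the value and jar paths below are illustrative placeholders.

```
# Illustrative: raise the Java agent's internal buffer (default is lower).
java -Delastic.apm.max_queue_size=1024 \
     -javaagent:/path/to/elastic-apm-agent.jar \
     -jar my-app.jar
```

A larger queue buffers more events during a brief APM Server outage, at the cost of extra agent memory.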
