Remove deprecated transports #452
Conversation
Fixes #386.
Besides deleting the duplicate code, this changes the span batching defaults from 10 spans per second to 200 spans per 60 seconds, and adopts the queue-flushing behavior from #384. The new flushing behavior hasn't caused any problems for stats, and it should be safe for traces. I'm interested to know how you tested the batch size change, though. The new batch size is consistent with other clients, but it means that users sending between 200 and 600 spans/traces per minute now have to change the default.
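For concreteness, here is the arithmetic behind the 200–600 spans/minute concern, assuming at most one batch is submitted per flush interval (the names below are illustrative, not the project's actual configuration):

```python
# Old defaults: flush up to 10 spans every 1 second.
OLD_MAX_BATCH_SIZE = 10
OLD_FLUSH_INTERVAL_S = 1

# New defaults: flush up to 200 spans every 60 seconds.
NEW_MAX_BATCH_SIZE = 200
NEW_FLUSH_INTERVAL_S = 60

# Per-minute throughput ceiling if only one batch goes out per interval:
old_ceiling = OLD_MAX_BATCH_SIZE * (60 // OLD_FLUSH_INTERVAL_S)  # 10 * 60 = 600/min
new_ceiling = NEW_MAX_BATCH_SIZE * (60 // NEW_FLUSH_INTERVAL_S)  # 200 * 1 = 200/min
```

So users producing between 200 (the new ceiling) and 600 (the old ceiling) spans per minute would start dropping data under the new defaults unless they raise the batch size.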
We've been running with 200/60s for a couple of months without issues, as the previous default ran through quota immediately. Perhaps we should bump the batch size up to 600?
Either way, I think we need to update the code to submit as many batches as possible per wait period. Otherwise, services under high load will buffer stats/traces until they run out of memory, since only _DEFAULT_MAX_BATCH_SIZE entries can be submitted per minute.
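A minimal sketch of the proposed fix, a drain step that the periodic flusher would call once per wait period (`_DEFAULT_MAX_BATCH_SIZE` comes from this discussion; the queue and `submit` interfaces here are assumptions, not the project's actual API):

```python
import queue

_DEFAULT_MAX_BATCH_SIZE = 200  # new default discussed above

def drain(q, submit, max_batch_size=_DEFAULT_MAX_BATCH_SIZE):
    """Submit as many full batches as the queue currently holds.

    Because this loops until the queue is empty, a service under high
    load is no longer capped at one batch per interval, so entries can't
    accumulate unboundedly between flushes.
    """
    submitted = 0
    while True:
        batch = []
        while len(batch) < max_batch_size:
            try:
                batch.append(q.get_nowait())
            except queue.Empty:
                break
        if not batch:
            break
        submit(batch)
        submitted += len(batch)
    return submitted
```

The background worker would then call `drain(q, submit)` every 60 seconds instead of popping a single batch, keeping memory bounded regardless of load.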
Bumping the batch size up sounds good to me.
Closes #386.