
Implement non-blocking event sending #1155

Merged
st0012 merged 7 commits into master from implement-#1130 on Dec 18, 2020

Conversation

@st0012 (Contributor) commented Dec 17, 2020

For users who don't use config.async, sending events is currently a blocking operation. This means a request won't complete until SDK operations like event serialization and the HTTP call to Sentry have finished. So this PR introduces non-blocking event sending to the SDK.

The functionality works like this:

  1. When the SDK is initialized, a Sentry::BackgroundWorker will be initialized too.
  2. When an event is passed to Client#capture_event, instead of sending it directly with Client#send_event, we'll let the worker do it.
  3. The worker holds a pool of threads, and an idle thread will pick up the job and call Client#send_event.
  • If all the threads are busy, new jobs will be put into a queue, which has a limit of 30.
  • If the queue size is exceeded, new events will be dropped.
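The flow above — a fixed pool of worker threads draining a bounded queue, dropping jobs when the queue is full — can be sketched with plain Ruby threads (an illustrative sketch, not the SDK's actual code; `SimpleWorker` is a hypothetical name):

```ruby
# Simplified sketch of the capture flow: worker threads drain a bounded
# queue; jobs submitted beyond the limit are dropped.
class SimpleWorker
  QUEUE_LIMIT = 30

  def initialize(threads: 4)
    @queue = Queue.new
    @threads = threads.times.map do
      Thread.new do
        # an idle thread picks up the next job (e.g. sending an event)
        while (job = @queue.pop)
          job.call
        end
      end
    end
  end

  # Returns true if the job was enqueued, false if it was dropped.
  def perform(&block)
    return false if @queue.size >= QUEUE_LIMIT # queue full: drop the event
    @queue.push(block)
    true
  end

  def shutdown
    @threads.size.times { @queue.push(nil) } # one stop signal per thread
    @threads.each(&:join)
  end
end
```

With this shape, Client#capture_event only pays the cost of an enqueue; the serialization and network work happens on a worker thread.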

About Sentry::BackgroundWorker

  • The worker is built on top of the concurrent-ruby gem's ThreadPoolExecutor, which is also used by Rails ActiveJob's async adapter. This should minimize the risk of messing up client applications with our own thread pool implementation.
  • It respects the user's async implementation: if config.async is set, the worker won't initialize a thread pool and won't be used.
  • The worker will be initialized at the end of the Sentry.init call.

New Configuration Option - background_worker_threads

This change also introduces a new background_worker_threads config option. It allows users to decide how many threads the worker should hold. By default, the value will be the number of processors on the user's machine. For example, if the user's machine has 4 processors, the value would be 4.

Of course, users can always override the value to fit their use case:

```ruby
config.background_worker_threads = 5 # the worker will have 5 threads for sending events
```

Users can also disable the non-blocking behaviour entirely by setting the value to 0:

```ruby
config.background_worker_threads = 0 # all events will be sent synchronously
```
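Put together, a typical setup might look like this (a configuration sketch; the DSN value is a placeholder):

```ruby
Sentry.init do |config|
  config.dsn = "https://examplePublicKey@o0.ingest.sentry.io/0" # placeholder DSN
  # Optional: override the default, which is the machine's processor count.
  config.background_worker_threads = 5
end
```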

closes #1130

@st0012 st0012 added this to the 4.1.0 milestone Dec 17, 2020
@st0012 st0012 self-assigned this Dec 17, 2020
@codecov-io commented Dec 17, 2020

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 98.28%. Comparing base (8d0a146) to head (d66c36c).
⚠️ Report is 1184 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1155      +/-   ##
==========================================
+ Coverage   97.79%   98.28%   +0.49%     
==========================================
  Files         187       94      -93     
  Lines        7840     4078    -3762     
==========================================
- Hits         7667     4008    -3659     
+ Misses        173       70     -103     


@st0012 st0012 requested a review from HazAT December 17, 2020 12:20
@HazAT (Member) left a comment:

LGTM, I'll add @bruno-garcia for a second pair of eyes

@HazAT HazAT requested a review from bruno-garcia December 17, 2020 12:32
@bruno-garcia (Member) left a comment:

LGTM. Very nice and simple



Development

Successfully merging this pull request may close these issues.

Internal Transport queue to limit backpressure

4 participants