I still remember the first time a range-sum problem blew up my runtime budget. I had a dashboard that needed totals for thousands of time windows across a large array of daily transactions. A naive loop for each query felt innocent, but it ballooned into millions of additions and a sluggish UI. The fix was almost embarrassingly simple: compute running totals once, then answer every query in constant time. That technique is the prefix sum.
If you’ve ever needed to sum numbers between two indices, count events in a time interval, or compare subarrays quickly, prefix sums can change the game. I’ll show you what they are, why they work, and how to implement them in a way that’s safe for real systems. I’ll also point out the edge cases I’ve tripped over, when you should avoid them, and how I use them in modern workflows. By the end, you should be able to reach for prefix sums with confidence and explain them to someone else without hand-waving.
Prefix Sums: The Core Idea
A prefix sum array is a derived array where each index stores the total of everything up to that point. For an original array arr, the prefix sum array prefix is defined as:
prefix[0] = arr[0]
prefix[1] = arr[0] + arr[1]
prefix[2] = arr[0] + arr[1] + arr[2]
and so on.
Mathematically, for index i:
prefix[i] = arr[0] + arr[1] + ... + arr[i]
The word “prefix” is literal: you’re summing the prefix (the beginning segment) of the array. What makes this powerful is a simple subtraction trick. If you want the sum from index L to index R, you can compute:
sum(L, R) = prefix[R] - prefix[L - 1] (when L > 0)
If L is 0, the answer is just prefix[R].
That’s it. You pay a one-time cost to build the prefix array, then each range-sum query becomes constant time.
A quick analogy
Think of prefix sums as a running bank balance. If you know your balance after each day, you can find how much you spent between Monday and Thursday by subtracting Monday’s balance from Thursday’s. You don’t need to replay every transaction; you just compare two snapshots.
Why the subtraction trick works
It’s easy to memorize the formula, but I like to keep the “why” handy, especially when explaining it to someone else or debugging a tricky case. The key idea is cancellation.
Let prefix[R] be the sum from index 0 through R. Let prefix[L-1] be the sum from index 0 through L-1. Subtracting those two removes all contributions from indices 0..L-1, leaving only L..R. It’s the same logic as cumulative rainfall: the total from March to May equals total through May minus total through February.
This mental model matters because it scales. The 2D prefix sum subtraction rule is just this same cancellation idea, applied twice.
Building a Prefix Sum Array (and Why It’s O(n))
Building prefix sums is a single pass. You take the current value and add it to the previous total. This takes linear time and linear space.
Here’s a clean, runnable Python example with minimal but useful comments:
from typing import List
def build_prefix_sum(values: List[int]) -> List[int]:
    if not values:
        return []
    prefix = [0] * len(values)
    prefix[0] = values[0]
    for i in range(1, len(values)):
        # running total up to i
        prefix[i] = prefix[i - 1] + values[i]
    return prefix

if __name__ == "__main__":
    data = [2, 4, 6, 8, 10]
    prefix = build_prefix_sum(data)
    print(prefix)  # [2, 6, 12, 20, 30]
And here’s the matching JavaScript version:
function buildPrefixSum(values) {
  if (values.length === 0) return [];
  const prefix = new Array(values.length);
  prefix[0] = values[0];
  for (let i = 1; i < values.length; i++) {
    // running total up to i
    prefix[i] = prefix[i - 1] + values[i];
  }
  return prefix;
}
const data = [2, 4, 6, 8, 10];
const prefix = buildPrefixSum(data);
console.log(prefix); // [2, 6, 12, 20, 30]
Time complexity is O(n) because you visit each element once. Space complexity is O(n) because you store the prefix array. This is the trade you make for constant-time queries later.
A small but important design choice: length n or n+1?
There are two common conventions:
1) Prefix array same length as input (what I used above): prefix[i] = sum(0..i).
2) Prefix array with one extra leading zero: prefix[0] = 0, prefix[i+1] = sum(0..i).
The second style is often safer because it avoids special-casing L == 0 and makes formulas uniform. For example:
sum(L, R) = prefix[R + 1] - prefix[L]
The cost is a little extra space and a slightly different indexing mental model. In real systems, I prefer the extra-zero version because it reduces off-by-one bugs. In teaching contexts, I show the same-length version first because it’s visually direct. Pick one and apply it consistently.
Range Queries in Constant Time
Let’s make the subtraction trick concrete. Suppose arr = [2, 4, 6, 8, 10] and prefix = [2, 6, 12, 20, 30].
If you want the sum from index 1 to 3 (4 + 6 + 8):
prefix[3] = 20
prefix[0] = 2
sum(1, 3) = 20 - 2 = 18
That gives the correct answer with a constant number of operations. Here’s a small helper function that makes this neat and safe:
def range_sum(prefix, left, right):
    if left < 0 or right >= len(prefix) or left > right:
        raise ValueError("Invalid range")
    if left == 0:
        return prefix[right]
    return prefix[right] - prefix[left - 1]
When you build systems that need thousands or millions of queries—analytics dashboards, monitoring tools, log analysis, even game scoring—this approach keeps the latency predictable.
Make the query interface intentional
One detail I care about in production code is the query interface. Are indices inclusive? Are they zero-based? Are timestamps converted to bucket indices? The prefix sum formula assumes inclusive endpoints (L and R both included). If you store buckets or events with half-open intervals (common in time series), the interface should align with that to avoid confusion.
I often define helper methods like:
- sum_inclusive(l, r) for the inclusive range
- sum_range(l, r) for the half-open [l, r)
That small naming discipline prevents bugs when multiple teams use the same utility.
In-Place Prefix Sums (When Memory Matters)
Sometimes you can’t afford a second array. If you own the input and don’t need the original values later, you can compute prefix sums in-place:
arr[i] = arr[i] + arr[i - 1]
That’s still O(n) time, but space drops to O(1) extra. Here’s a C++ example:
#include <iostream>
#include <vector>

void buildPrefixInPlace(std::vector<int>& a) {
    for (size_t i = 1; i < a.size(); ++i) {
        // running total stored back into a
        a[i] = a[i] + a[i - 1];
    }
}

int main() {
    std::vector<int> a = {2, 4, 6, 8, 10};
    buildPrefixInPlace(a);
    for (auto v : a) std::cout << v << " ";
    std::cout << "\n";
    return 0;
}
I use the in-place approach when the array is big and memory is tight, such as embedded devices or memory-constrained data pipelines. The downside is that you lose the original array, so you need to plan for that.
In-place pitfalls
- Debugging gets harder because you can’t log the original values after the transformation.
- If you build prefix sums in-place and then later need to apply another transformation (like normalization), you must be careful about ordering.
- In concurrent code, mutating shared arrays can be risky. If another thread expects raw values, you’ll introduce a subtle data race or correctness bug.
If any of those risks are in play, I stick with a separate prefix array and accept the extra memory.
Practical Example: Daily Energy Usage Report
Let’s say you’re tracking hourly energy consumption for a smart building. You have 24 values per day, and you need totals for arbitrary intervals: 8am–noon, 2pm–6pm, or an overnight range.
A naive approach sums the data each time you request a window. But a prefix array lets you compute any window instantly. Here’s a runnable Python example that uses real numbers and handles user input safely:
from typing import List

def build_prefix_sum(values: List[float]) -> List[float]:
    if not values:
        return []
    prefix = [0.0] * len(values)
    prefix[0] = values[0]
    for i in range(1, len(values)):
        prefix[i] = prefix[i - 1] + values[i]
    return prefix

def range_sum(prefix: List[float], left: int, right: int) -> float:
    if left < 0 or right >= len(prefix) or left > right:
        raise ValueError("Invalid range")
    return prefix[right] if left == 0 else prefix[right] - prefix[left - 1]

if __name__ == "__main__":
    # kWh per hour for a day
    energy = [1.2, 1.0, 0.9, 0.8, 0.7, 0.6,
              0.9, 1.4, 1.8, 2.2, 2.6, 2.4,
              2.1, 1.9, 2.0, 2.3, 2.5, 2.7,
              2.1, 1.8, 1.5, 1.3, 1.1, 1.0]
    prefix = build_prefix_sum(energy)

    # total energy from 8am (index 8) to noon (index 12)
    morning_total = range_sum(prefix, 8, 12)
    print(f"8am–noon: {morning_total:.2f} kWh")

    # evening total from 6pm (index 18) to 10pm (index 22)
    evening_total = range_sum(prefix, 18, 22)
    print(f"6pm–10pm: {evening_total:.2f} kWh")
In my experience, this is where prefix sums shine: repeated, small queries over a fixed dataset. You can plug this into a web service, an analytics panel, or a CLI tool and keep latency stable.
Handling overnight ranges
A common edge case in time-based systems is a range that wraps around midnight, like 10pm–2am. Prefix sums alone don’t directly handle wraparound ranges, but the fix is simple: split the range into two pieces and add them.
For a day with 24 buckets, 22..23 and 0..2 becomes:
sum(22, 23) + sum(0, 2)
This is the kind of detail I handle in a wrapper function so the rest of the codebase can stay clean.
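A wrapper along these lines keeps the wraparound split in one place; this is a sketch, wrap_sum is my own name, and it assumes an inclusive-endpoint range_sum helper like the one shown earlier:

```python
def range_sum(prefix, left, right):
    # inclusive range sum, same-length convention
    if left < 0 or right >= len(prefix) or left > right:
        raise ValueError("Invalid range")
    return prefix[right] if left == 0 else prefix[right] - prefix[left - 1]

def wrap_sum(prefix, left, right):
    # splits a range that wraps past the last bucket into two pieces
    if left <= right:
        return range_sum(prefix, left, right)
    return range_sum(prefix, left, len(prefix) - 1) + range_sum(prefix, 0, right)

# 24 hourly buckets, one event per hour for simplicity
hourly = [1] * 24
prefix = [sum(hourly[:i + 1]) for i in range(24)]
print(wrap_sum(prefix, 22, 2))  # buckets 22, 23, 0, 1, 2 -> 5
```

Callers never have to know whether a window crossed midnight.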
Common Mistakes I See (and How to Avoid Them)
1) Off-by-one errors
The formula prefix[R] - prefix[L - 1] is easy to misapply. I recommend making a tiny helper function that handles bounds and the L == 0 case. Avoid copy-pasting the formula in multiple places.
2) Mutating input without realizing it
In-place prefix sums are great, but you can’t go back to the original values. If later code needs raw data, copy the array first or build a separate prefix array.
3) Overflow with large sums
Integer overflow is a real risk when values are big or arrays are long. In C++ or Java, I use long long or long for prefix sums even when the original array uses int. In JavaScript, numbers are floating-point, which can introduce precision issues for huge sums, so I use BigInt if needed.
4) Forgetting negative values
Prefix sums work with negatives, but some people incorrectly assume they only apply to non-negative arrays. The range sum formula works regardless of sign; what changes is the behavior of sliding window tricks that rely on monotonicity.
5) Ignoring empty arrays
An empty array is a valid input in many systems. Decide upfront whether you return an empty prefix array or throw an error. I prefer returning an empty prefix array and letting the query function handle bounds.
6) Conflating indices with real-world units
I’ve seen real systems where an index represents a 5-minute bucket or a 15-minute bucket, and a bug appears because someone uses a timestamp directly as an index. Convert time to bucket indices in one place, and assert the mapping. This is a boring but crucial production safeguard.
7) Rebuilding prefix sums too often
If your data is stable across many queries, build prefix sums once and reuse them. A common performance bug is rebuilding the prefix array for each query or in a tight loop for each request. Cache it, memoize it, or store it in a service-level structure if you’re processing repeated data.
When Prefix Sums Are a Great Fit
I reach for prefix sums when I see any of these patterns:
- Many range-sum queries over a static array
- Subarray sum comparisons, like checking if a window hits a threshold
- Building higher-level structures such as 2D prefix sums (more on that soon)
- Computing cumulative totals for charts and reports
- Counting frequency events over time windows
One concrete example: a monitoring service that calculates request counts per minute. With prefix sums, you can find requests between two timestamps in O(1), even if you need to answer thousands of queries per dashboard refresh.
When Prefix Sums Are the Wrong Tool
Prefix sums are not a silver bullet. I avoid them in these cases:
- The array changes frequently and you need to update values in the middle. Prefix sums must be rebuilt unless you use a more advanced data structure.
- You only need one or two range queries. In that case, a simple loop is simpler and clearer.
- You need queries like min or max over ranges; prefix sums won’t help there.
If you need fast updates and fast queries, I switch to Fenwick trees (Binary Indexed Trees) or segment trees. They’re more complex, but they handle updates in logarithmic time.
The “update vs query” rule of thumb
Here’s how I decide quickly:
- Many queries, few updates: prefix sums.
- Many updates, many queries: Fenwick or segment trees.
- Only a few queries: direct sum.
This rule is not absolute, but it has kept me from premature complexity more times than I can count.
Real-World Applications I Use Regularly
1) Log analytics and time-based metrics
When I’m analyzing logs or time series data, I usually have a fixed array of counts by time bucket. Prefix sums let me answer “How many errors between 2pm and 6pm?” without scanning the whole range each time.
2) Resource budgeting in schedulers
In cluster schedulers, I often need to know how much CPU time or memory was used in a window. Prefix sums help estimate usage quickly, which is critical for adaptive scheduling.
3) Finance and trading backtests
For backtesting strategies, I might compute cumulative PnL or volume. Prefix sums let me compare different intervals fast, which is useful for rolling analysis.
4) Game development
When I help on game systems, I see prefix sums used in score tracking, damage history, and moving averages. The constant-time query is perfect for low-latency gameplay.
5) Data preprocessing for machine learning
Even in 2026 workflows that use AI-assisted tooling, many models still rely on careful feature engineering. Prefix sums are a fast way to compute windowed features for time-series models.
6) A/B testing and product analytics
When you track conversions or feature usage over time, you often want to compare windows before and after a launch. Prefix sums make those comparisons cheap, which can be the difference between interactive dashboards and laggy reports.
7) Inventory and supply chain
If you maintain daily demand counts, prefix sums let you answer “How many units sold between week 2 and week 4?” in constant time, which is useful for forecasting or anomaly detection.
Classic Problem Patterns That Use Prefix Sums
Here are the patterns that I recognize immediately as prefix-sum candidates:
- Range sum queries: "Sum from L to R for many queries."
- Subarray sum equals K: prefix[i] - prefix[j] = K suggests hashing prefix sums.
- Maximum subarray sum with constraints: prefix sums can transform to range comparisons.
- Balanced arrays: mapping 0 -> -1 and using prefix sums to find equal counts of 0s and 1s.
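To make the balanced-array trick concrete, here's a sketch: after mapping 0 to -1, two equal prefix values bound a subarray with equal counts of 0s and 1s. The function name is mine:

```python
def longest_balanced(bits):
    # longest subarray with equal numbers of 0s and 1s
    first_seen = {0: -1}  # prefix value -> earliest index where it occurred
    prefix = 0
    best = 0
    for i, b in enumerate(bits):
        prefix += 1 if b == 1 else -1  # map 0 -> -1
        if prefix in first_seen:
            best = max(best, i - first_seen[prefix])
        else:
            first_seen[prefix] = i
    return best

print(longest_balanced([0, 1, 0, 0, 1, 1, 0]))  # 6
```

The hash map of first occurrences is the same idea the subarray-sum-equals-K example below relies on.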
Example: Subarray Sum Equals K
The trick here is not just prefix sums, but using a hash map of prefix sums seen so far. If current prefix is S, and you’ve seen S - K before, there’s a subarray that sums to K.
Here’s a Java example that I use in interviews and production:
import java.util.HashMap;

public class SubarraySumK {
    public static int countSubarrays(int[] nums, int k) {
        HashMap<Integer, Integer> count = new HashMap<>();
        count.put(0, 1); // empty prefix
        int prefix = 0;
        int total = 0;
        for (int value : nums) {
            prefix += value;
            int needed = prefix - k;
            total += count.getOrDefault(needed, 0);
            count.put(prefix, count.getOrDefault(prefix, 0) + 1);
        }
        return total;
    }

    public static void main(String[] args) {
        int[] nums = {1, 2, 3, -1, 2, 1};
        System.out.println(countSubarrays(nums, 3)); // 3: [1,2], [3], [2,1]
    }
}
This uses prefix sums without explicitly building the array, but the concept is the same.
Another pattern: longest subarray with sum at most K
If all numbers are non-negative, you can pair prefix sums with a two-pointer (sliding window) technique. But if negatives appear, that monotonic property breaks. Prefix sums are still useful, but you’ll need a different strategy (often a balanced tree or binary search over prefix sums).
I call this out because people sometimes mix the patterns and assume sliding window always works. It doesn’t. Prefix sums are universal, but the query strategy around them depends on the data’s properties.
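For the non-negative case, the sliding-window version looks like this; it's a sketch under that assumption, and with negatives present the shrinking step is no longer safe, as noted above:

```python
def longest_at_most_k(nums, k):
    # assumes all nums are non-negative, so the window sum grows
    # monotonically as the window expands
    left = 0
    window = 0
    best = 0
    for right, value in enumerate(nums):
        window += value
        while window > k:  # shrink from the left until the sum fits
            window -= nums[left]
            left += 1
        best = max(best, right - left + 1)
    return best

print(longest_at_most_k([2, 1, 3, 1, 1, 4], 5))  # 3 (the window [3, 1, 1])
```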
2D Prefix Sums for Grids and Images
Prefix sums extend naturally to 2D arrays. If you have a matrix grid, you can build a 2D prefix matrix where each cell is the sum of everything above and to the left. Then any rectangle sum can be answered in constant time with a few additions and subtractions.
This is extremely useful for image processing, heat maps, and spatial analytics.
Here’s a Python example for a 2D grid:
def build_prefix_2d(grid):
    if not grid or not grid[0]:
        return []
    rows, cols = len(grid), len(grid[0])
    prefix = [[0] * (cols + 1) for _ in range(rows + 1)]
    for r in range(1, rows + 1):
        for c in range(1, cols + 1):
            prefix[r][c] = (
                grid[r - 1][c - 1]
                + prefix[r - 1][c]
                + prefix[r][c - 1]
                - prefix[r - 1][c - 1]
            )
    return prefix

def rect_sum(prefix, r1, c1, r2, c2):
    # coordinates are inclusive, zero-based
    r1 += 1; c1 += 1; r2 += 1; c2 += 1
    return (
        prefix[r2][c2]
        - prefix[r1 - 1][c2]
        - prefix[r2][c1 - 1]
        + prefix[r1 - 1][c1 - 1]
    )
I add one extra row and column filled with zeros to avoid bounds checks. That small detail prevents errors and keeps the query formula clean.
Why 2D prefix sums matter in practice
- Heatmaps: sum the number of events in a rectangular region quickly.
- Image processing: compute the average color in a block by summing pixel values.
- Game maps: aggregate terrain costs for a rectangular area.
- GIS systems: calculate totals for regions without scanning each cell.
If 2D is useful, 3D and higher are just more of the same. The principle scales, but the complexity of indexing and memory grows fast, so I only use higher dimensions when necessary.
Performance Reality: What You Should Expect
Prefix sums are fast. On modern hardware, a single pass over a million elements typically takes under 10–15ms in a compiled language and may be closer to 15–30ms in a scripting language, depending on the runtime and data type. The important point is predictability: you pay once, then queries are constant time.
If you need to handle tens of millions of entries, you may start to care about cache behavior and memory layout. In that case, I prefer contiguous arrays and avoid heavy object wrappers. In Java, primitive arrays beat ArrayList. In JavaScript, typed arrays can help.
Before/after in a real workload
When I replace naive range loops with prefix sums, I typically see:
- Latency per query drop from linear to near-constant.
- P99 latency stabilize (fewer worst-case spikes).
- CPU usage flatten out across large bursts of queries.
Even when the prefix build time is noticeable, it’s amortized over many queries. The gain is especially obvious in interactive applications where users trigger repeated ranges on the same dataset.
Traditional vs Modern Practices (2026 View)
Prefix sums are a classic technique, but my workflow has changed in the last few years. Here’s how the pattern looks in practice today:
The modern 2026 approach:

- Use a small utility module and tested helpers
- Add quick property-based tests for range queries
- Use AI-assisted debugging to generate counterexamples
- Use explicit 64-bit types or BigInt where needed

I still write the logic myself, but I also lean on AI tools to generate test cases that catch off-by-one errors. That's not replacing thinking; it's a fast way to shake out mistakes.
Testing strategy I actually use
When I add a prefix sum utility to a codebase, I usually add tests like:
- For random arrays, compare range_sum to a naive sum over random ranges.
- Edge cases: empty array, single element, all negatives, all zeros.
- Large values to check overflow behavior (or the BigInt path).
These tests are simple, cheap, and incredibly effective. They also build confidence for any downstream features that depend on prefix sums.
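A sketch of such a randomized check, with the helpers redefined here so the snippet stands alone:

```python
import random

def build_prefix_sum(values):
    prefix = []
    total = 0
    for v in values:
        total += v
        prefix.append(total)
    return prefix

def range_sum(prefix, left, right):
    return prefix[right] if left == 0 else prefix[right] - prefix[left - 1]

random.seed(42)
for _ in range(1000):
    data = [random.randint(-100, 100) for _ in range(random.randint(1, 50))]
    prefix = build_prefix_sum(data)
    l = random.randrange(len(data))
    r = random.randrange(l, len(data))
    # the prefix answer must match the naive sum exactly
    assert range_sum(prefix, l, r) == sum(data[l:r + 1])
print("all random range checks passed")
```

A thousand random cases run in well under a second and reliably flush out off-by-one bugs.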
A Clear Mental Model You Can Reuse
If you remember only one thing, remember this: prefix sums are a precomputed ledger. Each entry tells you how much has accumulated from the start up to that point. Any range total is just a difference between two ledger entries.
That’s why prefix sums feel so magical at first and then become second nature. You’re replacing repeated work with a table of precomputed totals, then using subtraction to carve out exactly the slice you need.
Practical Implementation Patterns I Rely On
1) The utility module pattern
In production, I avoid scattering prefix sum logic across the codebase. I use a small module with:
- build_prefix(values)
- range_sum(prefix, l, r)
- Optional build_prefix_zero(values) for the extra-zero convention
This makes it easy to change conventions later and ensures every caller uses the same semantics.
2) Prefix sums for streaming or batched data
If data arrives in chunks, I do one of two things:
- Append to the prefix sum array as new data arrives (just add to the last prefix value).
- Rebuild when the batch completes if the data might be corrected or backfilled.
This way, the prefix structure always matches the current data. In most streaming pipelines, appending is fast and simple, and you still get constant-time queries.
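Appending is just one addition per new value. A sketch (extend_prefix is my own name for the helper):

```python
def extend_prefix(prefix, new_values):
    # append new data by continuing the running total
    total = prefix[-1] if prefix else 0
    for v in new_values:
        total += v
        prefix.append(total)
    return prefix

prefix = [2, 6, 12]          # prefix sums of [2, 4, 6]
extend_prefix(prefix, [8, 10])
print(prefix)  # [2, 6, 12, 20, 30]
```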
3) Hybrid approach with caching
If queries repeatedly ask for the same ranges, I sometimes cache those results on top of prefix sums. Prefix sums give me O(1), but caching removes even that tiny cost and makes it trivial to calculate repeated metrics. It’s not always necessary, but it’s easy to layer on.
Prefix Sums in Different Languages (Tiny Differences That Matter)
Python
- Integers are unbounded, so overflow isn’t a concern.
- Lists are dynamic; prefix sums are straightforward.
JavaScript
- Number is floating-point; for extremely large sums, precision may degrade.
- If I need exactness, I use BigInt, but then everything must be BigInt.
Java
- Use long for safety, even if the input is int.
- Arrays are fast; ArrayList adds overhead.
C++
- Use long long for sums.
- Prefer std::vector; avoid mixing signed/unsigned indices if possible.
Go
- int size can vary (32-bit vs 64-bit). For safety, I use int64 for prefix sums.
These are small differences, but I’ve seen them cause real bugs in production, so I call them out early.
Edge Cases You Should Handle Deliberately
1) Empty input: return empty prefix, or throw? Decide explicitly.
2) Single element: queries should still work and not assume prefix[-1].
3) Large ranges: if L and R come from user input, validate bounds.
4) Negative indices: reject early; they usually signal a caller bug.
5) Very large data: memory footprint matters; consider in-place or chunked approaches.
When I write the helper function, I often validate ranges and fail fast with a clear error message. That’s better than returning a wrong sum and letting it slip into analytics or finance outputs.
Alternative Approaches and How They Compare
Prefix sums are one tool in a broader toolbox. Here’s a quick comparison with common alternatives:
- Naive summation: simplest, best for few queries.
- Prefix sums: fastest for many queries on static data.
- Fenwick tree: supports updates and range queries in O(log n).
- Segment tree: handles range sums, range mins, and more with updates in O(log n).
I keep prefix sums as my first choice for read-heavy workloads. If updates become frequent, I switch tools rather than bending prefix sums into something they’re not.
Production Considerations: Scaling, Monitoring, and Stability
Memory budget
A prefix array doubles the memory footprint of your data. If you’re dealing with tens of millions of entries, that’s significant. I’ve seen analytics systems hit memory ceilings simply because a prefix array was added without sizing. In those cases, I use in-place, chunked prefix sums, or move to a more compact representation (like int32 vs int64 when safe).
Batch vs real-time
Prefix sums are easy to update in real time by appending. If you need to correct older data, you’ll need a rebuild or a more dynamic structure. In practice, I separate “live” and “historical” segments: a smaller dynamic structure for recent data and a frozen prefix for older data. It keeps complexity under control.
Monitoring correctness
I like to include “canary” checks in analytics systems. For example, each day I verify the total sum of all buckets via two different methods and compare. This catches corruption or misaligned indices early. Prefix sums make it easy to compute totals, so they fit naturally into monitoring checks.
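A canary along these lines is cheap to run; this is a sketch and the names are mine:

```python
def canary_total_matches(values, prefix, tolerance=0):
    # compare the grand total computed two independent ways:
    # a direct sum versus the last prefix entry
    direct = sum(values)
    via_prefix = prefix[-1] if prefix else 0
    return abs(direct - via_prefix) <= tolerance

values = [3, 1, 4, 1, 5]
prefix = [3, 4, 8, 9, 14]
print(canary_total_matches(values, prefix))  # True
```

A nonzero tolerance is useful for float data, where accumulated rounding can make the two totals differ slightly.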
Deeper Code Example: A Robust Range Sum Class
Here’s a Python class that uses the extra-zero convention and provides a clean interface. It includes bounds checks and supports half-open ranges, which I find more natural in time-based data.
from typing import List
class PrefixSum:
    def __init__(self, values: List[int]):
        self.prefix = [0] * (len(values) + 1)
        for i, v in enumerate(values):
            self.prefix[i + 1] = self.prefix[i] + v

    def sum_inclusive(self, left: int, right: int) -> int:
        if left < 0 or left > right or right >= len(self.prefix) - 1:
            raise ValueError("Invalid range")
        return self.prefix[right + 1] - self.prefix[left]

    def sum_half_open(self, left: int, right: int) -> int:
        # sums [left, right)
        if left < 0 or left > right or right > len(self.prefix) - 1:
            raise ValueError("Invalid range")
        return self.prefix[right] - self.prefix[left]
Example usage
values = [5, -2, 3, 7, 1]
ps = PrefixSum(values)
print(ps.sum_inclusive(1, 3)) # -2 + 3 + 7 = 8
print(ps.sum_half_open(1, 4))  # -2 + 3 + 7 = 8
I keep both inclusive and half-open methods because it avoids confusion when integrating with time-based indexing systems. The extra zero makes the formula consistent, and it also makes empty ranges sum to zero naturally in the half-open case.
Prefix Sums and Windowed Averages
A common extension is to compute averages over ranges. If you can compute a sum in O(1), an average is just sum divided by length. This is especially useful for rolling metrics like “average error rate over the last N minutes.”
With prefix sums, the average for [L, R] becomes:
avg(L, R) = (prefix[R] - prefix[L-1]) / (R - L + 1)
When the denominator is small and the sum is large, I use floating-point division with care. If precision matters, I do the sum in 64-bit and then cast to float only at the end.
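As code, with the same-length convention (range_avg is my name for the helper):

```python
def range_avg(prefix, left, right):
    # average over the inclusive range [left, right]
    total = prefix[right] if left == 0 else prefix[right] - prefix[left - 1]
    return total / (right - left + 1)

prefix = [2, 6, 12, 20, 30]  # built from [2, 4, 6, 8, 10]
print(range_avg(prefix, 1, 3))  # (4 + 6 + 8) / 3 = 6.0
```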
A Few Subtle Optimizations That Help at Scale
1) Avoid repeated bounds checks in tight loops when safe. Pre-validate ranges in batch, then compute sums quickly.
2) Use contiguous arrays for cache friendliness.
3) Store prefix sums in the same numeric type as your maximum expected sum, not the input type.
4) If you batch many queries, re-order them by R to improve cache access to the prefix array (this can matter for very large arrays).
Most of the time, you don’t need these micro-optimizations. But when you do, the prefix sum structure is simple enough that it’s easy to tune.
A Short Checklist Before You Use Prefix Sums
- Do I have many range-sum queries on static data?
- Can I afford the extra memory for the prefix array?
- Is the numeric type large enough to avoid overflow?
- Are range endpoints well-defined and consistent?
- Have I tested edge cases like empty arrays and single elements?
If those answers are yes, prefix sums are a strong fit.
Final Mental Anchor
Prefix sums are precomputed totals. They let you answer range-sum queries by subtraction instead of repeated addition. Once you see them as a ledger of cumulative values, the formulas feel obvious, and the technique becomes a default tool in your problem-solving toolbox.
I still use prefix sums constantly, both in algorithmic interviews and in real systems. They’re simple, fast, and reliable. And when you pair them with careful interfaces and testing, they’re surprisingly robust in production.
If you want to go further after this, I’d recommend practicing with:
- Subarray sum equals K using hash maps
- 2D prefix sums on small grids
- Rolling averages and windowed analytics
Each of those builds on the same core idea and reinforces why prefix sums are such a foundational technique.


