When a Python script feels slow, guessing is the fastest way to waste time. I’ve seen teams spend days “fixing” the wrong part of a pipeline because nobody measured the actual runtime. In my work, I start by timing the code in a few different ways, each suited to a specific question. Am I comparing two small snippets? I use a repeatable micro‑benchmark. Am I measuring a whole batch job? I use a wall‑clock timer around the critical path. Am I diagnosing a long‑running service? I log durations with consistent tags so I can spot regressions over time.
You should treat execution time like a diagnostic signal, not a single number. The right measurement method gives you a stable view of performance, and the wrong one gives you noise. In this guide, I’ll show you how I time Python code using the timeit, time, and datetime modules, plus some modern workflow patterns that are common in 2026. I’ll also cover common mistakes, when not to measure at all, and how to interpret results so you can take action with confidence.
What “Execution Time” Really Means in Python
Execution time sounds simple, but it can mean different things depending on context. I usually split it into three categories:
- Wall‑clock time: The actual time you wait. This includes waiting on disk, network calls, and OS scheduling.
- CPU time: The amount of time the CPU spends executing your process. This excludes time spent waiting for I/O.
- Benchmark time: The time for a tiny snippet measured repeatedly to reduce randomness.
You should match the measurement to the decision you need to make. If you’re running a data export that your team waits on every morning, wall‑clock time is what matters. If you’re comparing two algorithms that run in memory, CPU time or micro‑benchmarks will be more accurate.
A useful analogy I give juniors: wall‑clock time is how long a coffee order takes from standing in line to walking away, CPU time is how long the barista actually spends making your drink, and a benchmark is timing just the espresso shot over and over to see the average. All are “time,” but they answer different questions.
Quick Checks with time: A Reliable First Pass
When I need a fast, simple measurement, I reach for the time module. It’s a great choice for “how long did this block take in total?” You should use it for scripts, ETL tasks, or endpoints where external calls matter.
Here’s a clean, runnable example that measures a CPU‑heavy calculation:
import time
start = time.time()
result = sum(i * 100 for i in range(1000))
end = time.time()
print(f"Execution time: {(end - start) * 1e3:.3f} ms")
I keep the code minimal and readable. I also prefer formatting the output because it makes logs easier to scan. If you need repeated measurements across different input sizes, you can wrap the same pattern in a loop:
import time
for n in range(100, 5501, 100):
    start = time.time()
    total = sum(i * 100 for i in range(n))
    end = time.time()
    # Explicit formatting keeps logs clean for later analysis
    print(f"n={n:4d} -> {(end - start) * 1e3:.3f} ms")
This pattern is great for quick scaling insights. You’ll see how time grows as the input size increases, which is often more useful than a single number.
When I use time:
- Measuring end‑to‑end script runtime
- Timing I/O‑heavy tasks
- Getting a rough baseline before deeper profiling
When I don’t use time:
- Comparing two tiny snippets (too noisy)
- Running inside a tight loop where overhead matters
Micro‑Benchmarks with timeit: My Go‑To for Small Snippets
If I’m deciding between two small implementations, I want a stable number. That’s what timeit is for. It runs the code many times, reducing noise from the OS scheduler, background tasks, and Python’s own startup overhead. I prefer timeit when I’m tuning inner loops or testing small functions.
Here’s a full example that uses a setup string to define the function and measures it a million times:
import timeit
setup_code = """
from math import sqrt

def build_list():
    return [sqrt(x) for x in range(100)]
"""
# timeit returns the total seconds for all runs; multiply by 1e3 for milliseconds
total_ms = timeit.timeit("build_list()", setup=setup_code, number=1_000_000) * 1e3
print(f"Total over 1,000,000 runs: {total_ms:.3f} ms ({total_ms / 1_000_000 * 1e3:.3f} us per call)")
A key detail: the code under test should be small and self‑contained. You should avoid measuring things like file reads or network calls with timeit because the variability will drown out the signal.
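If you'd rather not write a setup string, timeit also accepts a globals parameter so it can resolve names already defined in your module. A small sketch of that alternative (build_list here is just an illustrative function):

```python
import timeit

def build_list():
    return [x * x for x in range(100)]

# globals= lets timeit resolve names defined in this module,
# avoiding the setup string entirely (available since Python 3.5)
per_call_us = timeit.timeit("build_list()", globals=globals(), number=100_000) / 100_000 * 1e6
print(f"~{per_call_us:.2f} usec per call")
```

I tend to prefer this form inside real projects because the function under test stays importable and testable, instead of living inside a string.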
I also use the command line version of timeit when I want a quick read without writing a script. For example:
python -m timeit "sum(range(100))"
A typical output looks like:
1000000 loops, best of 5: 0.261 usec per loop
That “best of 5” detail matters. It means timeit ran the test several times and reported the fastest average. This is good for comparison, but you should still treat the result as a relative measure, not a precise promise.
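You can get the same "best of N" behavior in a script with timeit.repeat, which returns one total per repetition so you can take the minimum yourself. A minimal sketch:

```python
import timeit

# repeat() returns one total (in seconds) per repetition;
# taking the minimum mirrors the CLI's "best of 5" reporting
totals = timeit.repeat("sum(range(100))", repeat=5, number=100_000)
best_us = min(totals) / 100_000 * 1e6
print(f"best of 5: {best_us:.3f} usec per loop")
```

The minimum is the least-noisy estimate because background interference can only make runs slower, never faster.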
When I use timeit:
- Comparing two small code paths
- Evaluating micro‑level improvements
- Building confidence before refactoring
When I don’t use timeit:
- Measuring long scripts or workflows
- Timing async I/O or external services
datetime Timing: Readable for Logs and Reports
The datetime module isn’t my first choice for precision, but it’s convenient when I need timestamps in logs. If your team expects timestamps in reports, datetime.now() plus a delta is readable and easy to store.
Here’s a simple example:
from datetime import datetime
start = datetime.now()
result = sum(i * 100 for i in range(1000))
end = datetime.now()
elapsed_ms = (end - start).total_seconds() * 1e3
print(f"Execution time: {elapsed_ms:.3f} ms")
The output is clear and can be aligned with other timestamped events. I use this pattern in batch workflows where I’m also logging start and end times for auditing.
When I use datetime:
- Long‑running scripts where readability matters
- Logging start and end timestamps for reports
When I don’t use datetime:
- Micro‑benchmarks
- Tight loops where overhead skews results
Traditional vs Modern Timing Workflows (2026 Lens)
Timing code hasn’t changed much, but the workflow around it has. In 2026, I see teams leaning on AI‑assisted tooling, structured logs, and reproducible benchmarks. Here’s how I compare the approaches:
Traditional approach:
- time.time() deltas

Modern approach (2026):
- Structured timing logs with JSON fields for aggregation
- Reproducible benchmarks stored with source control
- AI‑assisted benchmark generation and summary analysis
- Continuous regression checks on critical paths

I still use the traditional tools, but I wrap them with modern practices. For example, I’ll log execution time as a JSON object so our observability stack can chart it over time. That’s a small change with a big impact on real‑world performance tracking.
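As a concrete sketch of that practice, here is how I emit a timing as one JSON object per measurement. The field names ("event", "label", "duration_ms") are just my convention, not a standard:

```python
import json
import time

start = time.perf_counter()
_ = sum(i * i for i in range(100_000))
elapsed_ms = (time.perf_counter() - start) * 1e3

# One JSON object per measurement; field names are a team convention
record = {"event": "timing", "label": "nightly_export", "duration_ms": round(elapsed_ms, 2)}
print(json.dumps(record))
```

Because every line has the same shape and units, a log aggregator can chart duration_ms over time without any parsing tricks.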
Common Mistakes I See (and How I Avoid Them)
Over the years, I’ve seen a few timing mistakes that lead teams astray. Here’s what I watch for:
- Measuring only once: One run can be misleading. I repeat measurements or use timeit for small code.
- Timing inside the function under test: This changes behavior and can skew results. I measure outside the function whenever possible.
- Mixing warm‑up and measurement: The first run may be slower due to caching or imports. I do a warm‑up pass when accuracy matters.
- Measuring I/O with timeit: File and network calls add unpredictable latency. I measure those with time and sample multiple runs.
- Ignoring variance: If one run is 10ms and another is 50ms, that’s a signal, not noise. I look for the cause before drawing conclusions.
A small example of a warm‑up pattern that I recommend:
import time
# Warm-up passes to pay one-time initialization costs up front
for _ in range(3):
    _ = [i ** 2 for i in range(10000)]

start = time.time()
_ = [i ** 2 for i in range(10000)]
end = time.time()
print(f"Measured after warm-up: {(end - start) * 1e3:.3f} ms")
This isn’t perfect, but it reduces surprises from one‑time initialization costs.
Choosing the Right Method: My Practical Heuristics
When someone asks me “How should I measure this?”, I use a few quick rules. You should treat these as defaults, not strict laws:
- Measuring a script or pipeline → Use time.time() around the main block.
- Comparing two small functions → Use timeit with at least 100k iterations.
- Logging timings for audit or compliance → Use datetime and store start/end timestamps.
- Monitoring long‑term trends → Log structured timings and aggregate in dashboards.
I also think about the level of accuracy I actually need. If you’re choosing between two algorithms with a 10x difference, a rough measure is fine. If you’re shaving milliseconds off a hot path, you’ll want a controlled benchmark.
Real‑World Scenarios and Edge Cases
To keep this practical, here are a few scenarios and how I measure them:
Scenario 1: API endpoint that calls a database and an external service
I use wall‑clock timing around the endpoint handler and log the result. I also time the database query separately so I can spot which part is slowing down.
Scenario 2: Data science notebook comparing two pandas approaches
I use timeit for the small section, but I also run the notebook cell multiple times to ensure the result is stable. If there’s a large spread, I consider the data cache effect.
Scenario 3: CLI tool with multiple subcommands
I time the entire command and log both total and per‑step durations. That lets me see where users are waiting.
Edge case: Very short code paths
If your code runs in microseconds, the overhead of timing can be larger than the code itself. I handle this by increasing the workload inside the measured block and dividing by the number of iterations.
import time
iterations = 1_000_000
start = time.time()
for _ in range(iterations):
    _ = 12345 * 67890
end = time.time()
per_op_ns = ((end - start) / iterations) * 1e9
print(f"Approx per-op time: {per_op_ns:.2f} ns")
This is not perfectly precise, but it gives you a sense of scale without requiring specialized profilers.
Performance Considerations Without the Hype
Performance discussions often turn into endless debates. I keep it simple: measure, interpret, decide. You should also remember that execution time is only one piece of performance. Memory usage, concurrency, and throughput matter too. Still, timing is usually the easiest signal to capture, so it’s a good starting point.
I also avoid over‑tuning. If a script runs in 120ms and you can get it to 90ms, that’s a nice win if it runs 10,000 times a day. If it runs once a week, that improvement probably doesn’t matter. In practice, I prioritize changes that shift time ranges significantly: for example, turning a 5–8 second task into a 300–600ms task, or reducing a batch job from 40–50 minutes to 5–10 minutes.
My Personal Checklist for Timing Python Code
When I’m mentoring or doing a quick internal review, I run through this list:
- What question are we trying to answer with this timing?
- Is the chosen method aligned with that question?
- Did we repeat measurements or use timeit for small code?
- Are we logging or reporting the result in a consistent format?
- Can we explain the number to a teammate in plain language?
That last one is important. If you can’t explain the timing result, it’s hard to act on it.
Practical Next Steps You Can Apply Today
If you want to improve how you measure execution time right away, here’s what I recommend:
- Start with time.time() around the critical path of your script to get a baseline.
- Use timeit for any snippet where you’re comparing approaches or exploring small changes.
- Add a warm‑up run before measurements when you care about stable results.
- Log timing data in a consistent way so you can compare results over time.
- Repeat your measurements under similar conditions, not just once on a fast laptop.
I also suggest keeping a tiny benchmark file in your repo for performance‑sensitive code. That way, you can re‑run it after refactors or dependency updates and spot regressions before users do.
Beyond Basics: perf_counter, process_time, and Why They Matter
The time module is great, but once you want more precision or want to isolate CPU usage, you should know about these two functions:
- time.perf_counter(): High‑resolution wall‑clock timer designed for benchmarking. It includes time spent sleeping and waiting on I/O.
- time.process_time(): CPU time used by the current process, excluding sleep and I/O waits.
Here’s how I use them side by side to see if I’m CPU‑bound or I/O‑bound:
import time
start_wall = time.perf_counter()
start_cpu = time.process_time()

# Simulate a mix of CPU work and I/O wait
_ = [i ** 2 for i in range(200000)]
time.sleep(0.1)

end_wall = time.perf_counter()
end_cpu = time.process_time()
print(f"Wall time: {(end_wall - start_wall) * 1e3:.2f} ms")
print(f"CPU time: {(end_cpu - start_cpu) * 1e3:.2f} ms")
If the wall‑clock time is much larger than CPU time, the script is likely waiting on I/O, sleeping, or blocked by external resources. That’s incredibly useful when you’re deciding whether to optimize code or optimize external dependencies.
My rule of thumb:
- Use perf_counter() for micro‑benchmarks inside scripts (even if you don’t use timeit).
- Use process_time() if you specifically want to ignore I/O and focus on CPU cost.
Simple Timing Utility I Reuse Across Projects
I don’t like copy‑pasting timing boilerplate all over a codebase. A small helper function makes measurements consistent and reduces mistakes. Here’s a tiny utility I use in scripts or internal tools:
import time
from contextlib import contextmanager
@contextmanager
def timer(label: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        end = time.perf_counter()
        elapsed_ms = (end - start) * 1e3
        print(f"{label}: {elapsed_ms:.2f} ms")

# Usage
with timer("load_and_process"):
    data = [i ** 3 for i in range(100_000)]
This pattern is clean, readable, and easy to reuse. It also creates a habit of measuring consistently, which reduces the “one‑off benchmarking” trap.
If you want structured logging instead of prints, replace the print line with a JSON logger. The key is consistency: the same format, the same units, the same labels. That’s how you compare results across time and across environments.
Measuring Functions with Decorators (When It’s Worth It)
I don’t recommend timing everything with decorators, but they’re useful for quick diagnostics, especially in small scripts or when you want to time multiple functions consistently.
import time
from functools import wraps
def timed(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        end = time.perf_counter()
        print(f"{fn.__name__} -> {(end - start) * 1e3:.3f} ms")
        return result
    return wrapper

@timed
def heavy_task(n=200_000):
    return sum(i ** 2 for i in range(n))

heavy_task()
I treat this as a temporary instrument. Once I’ve found the bottleneck, I remove it so it doesn’t add noise or overhead.
Interpreting Results: From Numbers to Decisions
A timing number is only useful if it changes a decision. Here’s how I interpret results in practice:
- Differences under 5%: Often noise, unless the code runs extremely often.
- Differences in the 10–30% range: Worth investigating if the code is hot or user‑visible.
- Order‑of‑magnitude differences: Almost always worth taking, even if it costs engineering time.
I also compare results within ranges, not absolute numbers. If one approach is 1.9–2.2 ms and another is 3.8–4.4 ms, I feel confident about the difference even if each individual run varies.
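The range comparison above can be automated with a small helper. This is a sketch of my approach, not a standard API; trial_ms is a name I made up, and the two workloads are placeholders:

```python
import statistics
import time

def trial_ms(fn, trials=7):
    """Run fn several times and return (min, median, max) in milliseconds."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1e3)
    return min(samples), statistics.median(samples), max(samples)

a = trial_ms(lambda: [v * 2 for v in range(50_000)])
b = trial_ms(lambda: list(map(lambda v: v * 2, range(50_000))))
print(f"A: {a[0]:.2f}-{a[2]:.2f} ms (median {a[1]:.2f})")
print(f"B: {b[0]:.2f}-{b[2]:.2f} ms (median {b[1]:.2f})")
# Only call a winner if the two ranges don't overlap
```

If the ranges overlap, I treat the approaches as equivalent until I have more data, rather than picking a winner from a single lucky run.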
Execution Time vs Throughput: Don’t Mix the Metrics
I’ve seen teams optimize for the wrong metric because they conflate latency (execution time) with throughput (units processed per time). They’re related but not the same.
- Execution time (latency): How long a single operation takes.
- Throughput: How many operations you can complete per second.
For example, a batch job might have high per‑item latency but extremely high throughput because it processes items in parallel. In that case, wall‑clock timing alone might make the system look slow even if it’s efficient overall. This matters when you interpret timing results: always check whether the “slow” result is actually hurting real‑world throughput.
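To see both metrics side by side, you can derive throughput from a single batch timing. A rough sketch with a placeholder workload:

```python
import time

items = list(range(100_000))

start = time.perf_counter()
processed = [i * 2 for i in items]  # the whole batch in one pass
elapsed_s = time.perf_counter() - start

latency_ms = elapsed_s / len(items) * 1e3  # average per-item latency
throughput = len(items) / elapsed_s        # items completed per second
print(f"per-item latency: {latency_ms:.6f} ms, throughput: {throughput:,.0f} items/s")
```

The same elapsed time yields both numbers, but they answer different questions: latency tells you how long one item waits, throughput tells you how much work the system completes.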
Timing in Async Code: Avoiding the Usual Pitfalls
Async code changes the picture. The wall‑clock time for a single task may be dominated by waiting, but the whole system can still be fast if you’re running multiple tasks concurrently.
Here’s how I measure async functions without blocking the event loop:
import asyncio
import time
async def fetch_simulated(delay: float):
    await asyncio.sleep(delay)
    return delay

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(
        fetch_simulated(0.2),
        fetch_simulated(0.3),
        fetch_simulated(0.1),
    )
    end = time.perf_counter()
    print(f"Total wall time: {(end - start) * 1e3:.2f} ms")
    print(f"Results: {results}")

asyncio.run(main())
The total time will be close to the longest single delay (not the sum). That’s expected, and that’s why measuring async code like synchronous code often misleads people.
When I time async tasks:
- Use wall‑clock timers around await blocks.
- Measure per‑task durations if you need to compare tasks.
- Prefer aggregate results (e.g., total duration of gather) when optimizing user‑visible latency.
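For per‑task durations, a small wrapper coroutine keeps the measurement from blocking the event loop. This is a sketch; timed_task is a name I made up:

```python
import asyncio
import time

async def timed_task(label, coro):
    # Wrap any awaitable and report its individual wall-clock duration
    start = time.perf_counter()
    result = await coro
    print(f"{label}: {(time.perf_counter() - start) * 1e3:.1f} ms")
    return result

async def main():
    # Tasks still run concurrently; each reports its own duration
    await asyncio.gather(
        timed_task("a", asyncio.sleep(0.05)),
        timed_task("b", asyncio.sleep(0.02)),
    )

asyncio.run(main())
```

Each task reports its own wall time while the total stays close to the longest one, which is exactly the behavior you want to confirm when debugging concurrency.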
Timing Network and I/O: Use Sampling, Not Single Runs
I/O variability is the fastest way to confuse yourself. If a download takes 120ms in one run and 450ms in the next, it doesn’t mean your code got worse. It means your environment changed.
In those cases, I collect a small sample and look at percentiles:
import time
import random
samples = []
for _ in range(20):
    start = time.perf_counter()
    time.sleep(random.uniform(0.02, 0.08))  # simulate variable I/O
    end = time.perf_counter()
    samples.append((end - start) * 1e3)

samples.sort()
print(f"p50: {samples[9]:.2f} ms")
print(f"p90: {samples[17]:.2f} ms")
print(f"p99: {samples[19]:.2f} ms")
Even without a statistics library, you can learn a lot from rough percentiles. This helps you avoid optimizing for “the average” when your real pain comes from outliers.
Measuring in Production: Lightweight and Safe
Production timing is a different game. The goal is to learn without adding significant overhead or noise. My guidelines:
- Use a lightweight timer such as perf_counter() and aggregate results.
- Log minimal fields (e.g., label, duration_ms, request_id).
- Sample instead of timing every single request to reduce overhead.
Here’s a simple sampling pattern I use:
import time
import random
SAMPLE_RATE = 0.1  # 10% sampling

start = time.perf_counter()
# ... work ...
end = time.perf_counter()

if random.random() < SAMPLE_RATE:
    elapsed_ms = (end - start) * 1e3
    print({"event": "timing", "label": "checkout_flow", "duration_ms": round(elapsed_ms, 2)})
This is basic, but it scales well. You keep visibility while avoiding a performance cliff caused by excessive logging.
Comparing Implementations: A Mini‑Playbook
When I compare two approaches, I want the experiment to be fair. Here’s my simple playbook:
- Isolate the code (don’t include unrelated setup).
- Use the same inputs for each test.
- Warm up if caches or imports are involved.
- Run multiple trials and compare ranges, not single numbers.
- Log results in the same units so comparisons are direct.
Here’s a compact example comparing list comprehension vs map for a small transformation:
import timeit
setup = """
values = list(range(1000))
"""
comp = timeit.timeit("[v * 2 for v in values]", setup=setup, number=100_000)
map_ = timeit.timeit("list(map(lambda v: v * 2, values))", setup=setup, number=100_000)
print(f"list comprehension: {comp * 1e3:.2f} ms")
print(f"map + lambda: {map_ * 1e3:.2f} ms")
The important part isn’t which is faster; it’s that the comparison is clean and reproducible.
Timing in Notebooks: Useful but Easy to Misuse
In notebooks, the goal is often speed of experimentation, not perfect benchmarks. I still time things, but I take notebook results as directional, not definitive.
Tips I follow:
- Restart the kernel if you’re testing performance after major changes.
- Run the cell multiple times and watch the spread.
- Avoid timing cells that include huge imports or one‑time initialization unless that is the real cost you care about.
If you want a quick timing in a notebook without external tools, a simple time.perf_counter() block is fine. But I avoid calling it a benchmark unless I’ve repeated it and controlled the environment.
Scaling Tests: Input Size Matters More Than You Think
A common mistake is benchmarking on a tiny input size and assuming the results will hold for large data. Complexity matters more than micro‑optimizations in that case.
I like to measure across a range of input sizes and then look at the shape of the curve. Even a simple loop can reveal whether performance grows linearly, quadratically, or worse.
import time
for n in [1000, 10000, 50000, 100000]:
    start = time.perf_counter()
    _ = [i ** 2 for i in range(n)]
    end = time.perf_counter()
    print(f"n={n:6d} -> {(end - start) * 1e3:.2f} ms")
If time explodes as n grows, the right fix is often algorithmic, not micro‑optimization. That’s a crucial insight that simple one‑off timings can hide.
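To read the shape of the curve, I look at the growth ratio when n doubles: roughly 2x suggests linear, roughly 4x suggests quadratic. Here is a sketch with a deliberately quadratic workload (the function is contrived for illustration):

```python
import time

def quadratic(n):
    # Deliberately O(n^2): membership tests against a growing list
    seen = []
    for i in range(n):
        if i not in seen:
            seen.append(i)

prev = None
for n in [500, 1000, 2000]:
    start = time.perf_counter()
    quadratic(n)
    elapsed = time.perf_counter() - start
    ratio = f" (x{elapsed / prev:.1f} vs previous)" if prev else ""
    print(f"n={n:5d} -> {elapsed * 1e3:.2f} ms{ratio}")
    prev = elapsed
```

When the ratio at each doubling is well above 2x, the fix is usually a better data structure (a set instead of a list here), not shaving instructions off the loop body.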
When Not to Measure Execution Time
This might sound strange, but sometimes you shouldn’t measure at all. If you don’t have a decision to make, you’re just collecting numbers. I avoid measuring when:
- The code is obviously fast enough for the use case.
- I haven’t identified a user‑visible latency problem.
- The measurement environment is too noisy to give stable results.
- The effort to measure would delay urgent work without benefit.
Execution time is a tool, not a ritual. I measure when it leads to action.
Common Pitfalls with Timing Logic in the Codebase
I’ve seen teams leave timing code in production permanently, or worse, build new features on top of it. That causes subtle issues:
- Side effects in timing logic can alter behavior (especially with decorators).
- Over‑logging can slow the system and create noisy metrics.
- Mixed units (ms vs seconds) can create misleading dashboards.
If you want to keep timing in production, I strongly recommend:
- Consistent units (ms or seconds, not both).
- Clear labels and tags.
- Sampling or rate limiting.
- A plan to clean up temporary timing hooks after the investigation.
Alternative Approaches: Profiling vs Timing
Timing tells you “how long,” but profiling tells you “where.” I often start with timing, then move to profiling if I need deeper insight.
Here’s the difference in practice:
- Timing: simple, fast, minimal overhead, good for baselines.
- Profiling: detailed, more overhead, shows hotspots and call stacks.
If a script is slow and the timing results aren’t clear, that’s a signal to reach for a profiler. But I still like to start with timing because it’s quick and provides a useful baseline for later comparisons.
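For completeness, here is a minimal profiling pass with the standard library's cProfile and pstats. The pipeline and slow_part functions are placeholders standing in for your real code:

```python
import cProfile
import io
import pstats

def slow_part():
    return sum(i * i for i in range(200_000))

def pipeline():
    slow_part()
    sorted(range(50_000), reverse=True)

# Profile the call, then print the top functions by cumulative time
profiler = cProfile.Profile()
profiler.enable()
pipeline()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

The output attributes time to individual functions and call stacks, which is exactly the "where" that a plain timer can't give you.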
A Tiny Benchmark File Template I Keep Around
I keep a file called bench.py in performance‑sensitive repos. It’s intentionally small and easy to run. Here’s a template you can adapt:
import time
def workload(n: int) -> int:
    return sum(i ** 2 for i in range(n))

if __name__ == "__main__":
    for n in [10_000, 50_000, 100_000]:
        start = time.perf_counter()
        workload(n)
        end = time.perf_counter()
        print(f"n={n:6d} -> {(end - start) * 1e3:.2f} ms")
This is not a replacement for real benchmarks, but it’s a fast sanity check you can run before and after refactors.
Comparing Results Across Machines (and Why It’s Tricky)
Timing results can change drastically across machines. A 3ms operation on a fast laptop can be 12–20ms on a shared CI runner. That doesn’t necessarily mean your code got worse; it means the environment is different.
When comparing across machines, I focus on:
- Relative differences (A is faster than B) rather than absolute times.
- Repeating measurements in the same environment whenever possible.
- Running on a consistent machine when benchmarking regressions.
If the team needs reliable numbers, I strongly suggest using a dedicated benchmark environment rather than ad‑hoc developer machines.
A Note on Garbage Collection and Timing Noise
Python’s garbage collector (GC) can introduce random spikes. If you’re measuring small snippets, a GC pause can distort results. I don’t recommend disabling GC in most cases, but if you’re running a micro‑benchmark and the results vary wildly, it’s worth considering.
A cautious approach is to run multiple trials and ignore outliers. The goal isn’t to “game” the result; it’s to find a stable signal in a noisy environment.
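Worth knowing: timeit already disables the garbage collector during its timing runs by default. If you're timing manually and suspect GC spikes, you can do the same thing explicitly. A cautious sketch (the workload is a placeholder chosen to allocate heavily):

```python
import gc
import statistics
import time

def measure_ms():
    start = time.perf_counter()
    _ = [str(i) for i in range(50_000)]  # allocation-heavy work that can trigger GC
    return (time.perf_counter() - start) * 1e3

# Pause the collector for the duration of the trials, then always restore it
gc.disable()
try:
    samples = sorted(measure_ms() for _ in range(7))
finally:
    gc.enable()

trimmed = samples[1:-1]  # drop the fastest and slowest trial as outliers
print(f"trimmed median: {statistics.median(trimmed):.2f} ms")
```

The try/finally matters: forgetting to re-enable GC is worse than any measurement noise you were trying to avoid.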
Practical Scenarios, Extended
To add more depth, here are a few additional scenarios I run into and how I handle them:
Scenario 4: Batch job with multiple phases
I time each phase separately and the total job. That gives me two insights: which phase is slow and how much each phase contributes. If a phase dominates, I focus there.
Scenario 5: Data pipeline with caching
I time the pipeline with a cold cache and a warm cache. The difference tells me how much the caching strategy is helping. The cold cache time is often closer to the real user experience for first‑time runs.
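A quick way to see the cold/warm split in miniature is an lru_cache-wrapped function; the first call pays full price, the second hits the cache. The expensive_lookup function here is a contrived stand-in for a real cached step:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive_lookup(n: int) -> int:
    # Stand-in for a slow computation or I/O-backed fetch
    return sum(i * i for i in range(n))

for label in ("cold cache", "warm cache"):
    start = time.perf_counter()
    expensive_lookup(200_000)
    print(f"{label}: {(time.perf_counter() - start) * 1e3:.3f} ms")
```

The gap between the two lines is your caching benefit; the cold number is what a first-time user actually experiences.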
Scenario 6: ML training loop
I measure per‑epoch time and log it. If epoch times drift upward, I check data loading, GPU utilization, or memory pressure. For training, consistency is often more important than micro‑optimizations.
Scenario 7: Web scraping task
I measure total wall‑clock time and per‑request time. If per‑request time spikes, it usually points to rate limiting or network issues rather than code inefficiency.
A Compact Decision Matrix You Can Use
I keep a simple decision matrix in my head. You can adapt it to your workflow:
- You want overall runtime → time.time() or perf_counter() around the main function.
- You want CPU vs I/O split → process_time() plus perf_counter().
- You want quick snippet comparison → timeit.
- You want logs with timestamps → datetime.
- You want production visibility → lightweight timing + sampling + structured logs.
What to Do After You Measure
Measurement isn’t the goal; action is. Here’s how I decide what to do next:
- If the result is acceptable: Stop. Don’t optimize for sport.
- If the result is slow but stable: Consider algorithmic changes or caching.
- If the result is unstable: Investigate environment or I/O variability.
- If the result is regressing: Check recent changes, dependencies, or infrastructure.
This keeps performance work focused and aligned with real outcomes.
A Final Perspective: Timing as a Skill, Not a One‑Off
The best engineers I know don’t treat timing as a special event. They build it into their thinking. They know which methods to use, how to interpret results, and when to stop.
If there’s one mindset shift I recommend, it’s this: execution time is a tool for clarity. The goal isn’t to make everything fast; it’s to make the right things fast enough.
Practical Next Steps You Can Apply Today (Extended)
To wrap things up, here’s a slightly expanded, concrete checklist you can apply right away:
- Measure your script’s end‑to‑end time with time.time() or perf_counter().
- If you suspect CPU bottlenecks, compare wall‑clock time and process time.
- Use timeit when deciding between two small implementations.
- Repeat measurements, then compare ranges rather than single values.
- Log results consistently so you can compare across days or releases.
- Keep a tiny benchmark file in your repo and run it after refactors.
- Consider sampling in production so you get real‑world data without overhead.
Execution time measurement isn’t glamorous, but it’s a foundational skill that saves real time and money. In my experience, the teams that measure first are the teams that ship with confidence. If you apply these techniques consistently, you’ll catch performance issues earlier, build better intuition, and make clearer tradeoffs when the pressure to move fast hits your next release.