When I’m reviewing production incidents, arithmetic expression bugs show up more often than you’d expect. A misread operator, a forgotten precedence rule, or a corner case around unary minus can turn a pricing engine into a fire drill. If you build calculators, filters, billing logic, or even a small scripting feature, you’ll eventually need a reliable evaluator. I’ll walk you through how I think about expression evaluation as a senior engineer in 2026, using the stack-based approach that still holds up in modern systems. You’ll see why postfix (Reverse Polish) is so friendly to machines, how to convert from infix without losing meaning, and how to run the actual evaluation safely and predictably. I’ll also show a full, runnable Python implementation, highlight common mistakes I still see in code reviews, and suggest a modern workflow for testing and verification. You should finish with the ability to implement or review an evaluator with confidence, not just copy an algorithm from a textbook.
Stacks as the evaluation engine
I like to explain stacks with a simple analogy: a stack is a pile of plates in a kitchen. You place new plates on top, and you always remove the top plate first. That “last in, first out” behavior maps perfectly to how we handle operators and operands in expression evaluation. When you read an expression from left to right, you either see a value (push it) or an operator (pop a pair of values, apply, push the result). That’s the full mental model.
You should also notice what the stack model buys you: locality and clarity. Every time you pop, you’re working on the most recent partial result. That keeps memory usage small and the implementation simple. In practice, the stack model also makes it easy to detect malformed expressions. If you try to pop two operands and there’s only one, you have a clear error. If you finish processing and the stack doesn’t hold exactly one value, the expression is invalid.
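To make that concrete, here’s a minimal sketch of my own (not the full evaluator from later in this article) that handles only numbers and “+”, but already demonstrates both error checks:

```python
def eval_pairs(tokens):
    """Evaluate postfix tokens of numbers and '+' to show the pop-apply-push loop."""
    stack = []  # a plain Python list: append pushes, pop removes the top plate
    for tok in tokens:
        if tok == '+':
            if len(stack) < 2:  # clear error: not enough operands to pop
                raise ValueError('missing operand')
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            stack.append(float(tok))
    if len(stack) != 1:  # leftover values mean a malformed expression
        raise ValueError('invalid expression')
    return stack[0]
```

Calling `eval_pairs(['1', '2', '+'])` returns 3.0, while `eval_pairs(['1', '2'])` raises because two values remain at the end.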
In my experience, stacks are the most reliable bridge between human-friendly expressions and machine-friendly evaluation. They help you keep precedence rules explicit, handle parentheses cleanly, and avoid the hidden complexity that shows up when you try to evaluate infix directly.
Three notations, one meaning
Arithmetic expressions are commonly written in infix notation, where operators sit between operands: A + B, 2 * 5, or (A - B) * C. Humans like infix because it reads naturally, but it is ambiguous without precedence and parentheses. That’s why you see (A + B) * C and A + (B * C) written differently, even though they use the same operators.
Prefix notation (also called Polish notation) moves the operator in front of its operands. Postfix (Reverse Polish) moves the operator after the operands. Both remove the need for parentheses because the order is explicit.
- Prefix: + A B
- Postfix: A B +
The power move here is that postfix turns evaluation into a single left-to-right scan. You push operands; when you see an operator, you pop and compute. No need to peek ahead for precedence rules because the ordering is already encoded. That is why stack-organized machines and stack-based interpreters naturally prefer postfix.
Here’s the classic example, kept in symbolic form:
- Infix: (A-B)*[C/(D+E)+F]
- Postfix: A B - C D E + / F + *
If you mentally trace it, you’ll see that postfix locks in the correct order: evaluate A-B and D+E first, then do C/(D+E), then add F, then multiply by (A-B). No extra parentheses needed.
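If you’d rather verify it mechanically, here’s a quick sketch with sample bindings I picked arbitrarily (A=10, B=4, C=12, D=2, E=4, F=1 are my own choices, not from the text):

```python
# Evaluate the postfix form A B - C D E + / F + * with sample bindings.
values = {'A': 10, 'B': 4, 'C': 12, 'D': 2, 'E': 4, 'F': 1}
ops = {'-': lambda a, b: a - b, '+': lambda a, b: a + b,
       '/': lambda a, b: a / b, '*': lambda a, b: a * b}

stack = []
for tok in 'A B - C D E + / F + *'.split():
    if tok in ops:
        b, a = stack.pop(), stack.pop()   # right operand comes off first
        stack.append(ops[tok](a, b))
    else:
        stack.append(values[tok])

result = stack[0]   # (10-4) * (12/(2+4) + 1) = 6 * 3 = 18
```

The single left-to-right scan produces 18, exactly what the infix form gives with the same bindings.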
Precedence and associativity rules that actually matter
I always tell people: precedence decides which operator binds tighter; associativity decides how to resolve ties. Without both, your evaluator will drift from user expectations.
For typical arithmetic with five binary operators, I keep the hierarchy simple:
- Highest: exponentiation (^)
- Next: multiplication (*) and division (/)
- Lowest: addition (+) and subtraction (-)
Associativity is the second rule. Most operators are left-associative (A - B - C becomes (A - B) - C), but exponentiation is usually right-associative (A ^ B ^ C becomes A ^ (B ^ C)). If you ignore this, you will compute power chains incorrectly, and that’s a bug you can’t easily explain away.
Parentheses override precedence. In (2 + 4) * (4 + 6), the parentheses force the additions before the multiplication, even though multiplication normally binds tighter than addition. That’s why the result is 60, not the 24 you’d get from 2 + 4 * 4 + 6.
I recommend you encode precedence and associativity in a clear data structure. Don’t scatter “if operator is ^” across your evaluator. That small design decision pays for itself when you extend the language with unary operators, modulo, or functions.
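As a sketch of what I mean, the table-driven version looks like this (the names `Op`, `OPS`, and `binds_tighter` are my own; the full implementation later in the article uses the same idea):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    precedence: int
    right_assoc: bool

# One table drives both conversion and evaluation; extending the language
# means adding a row here, not scattering another if-branch.
OPS = {
    '+': Op(1, False), '-': Op(1, False),
    '*': Op(2, False), '/': Op(2, False),
    '^': Op(3, True),   # right-associative
}

def binds_tighter(top, new):
    """Should operator `top` (already on the stack) apply before `new`?"""
    return (OPS[top].precedence > OPS[new].precedence or
            (OPS[top].precedence == OPS[new].precedence
             and not OPS[new].right_assoc))
```

With this helper, the shunting-yard pop rule and the evaluator never need to know which specific operators exist.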
Converting infix to postfix with a stack
The conversion algorithm is where most implementers stumble. The approach I trust is a simplified shunting-yard method. You maintain two structures: an output list and an operator stack.
Core rules I follow:
1) If the token is a number or identifier, append it to output.
2) If the token is an operator, pop operators from the stack to output while they have higher precedence, or equal precedence with left associativity. Then push the new operator.
3) If the token is “(”, push it to the operator stack.
4) If the token is “)”, pop operators to output until “(” is found, then discard the “(”.
5) After processing all tokens, pop remaining operators to output. If you find a mismatched parenthesis, error.
Let me walk a short example, using the numeric case so you can see it clearly:
Infix: (2+4) * (4+6)
Tokens: ( 2 + 4 ) * ( 4 + 6 )
Output building:
- Read “(” → push to operator stack
- Read “2” → output [2]
- Read “+” → stack [ ( + ]
- Read “4” → output [2, 4]
- Read “)” → pop “+” to output, discard “(” → output [2, 4, +]
- Read “*” → stack [ * ]
- Read “(” → stack [ *, ( ]
- Read “4” → output [2, 4, +, 4]
- Read “+” → stack [ *, (, + ]
- Read “6” → output [2, 4, +, 4, 6]
- Read “)” → pop “+” to output, discard “(” → output [2, 4, +, 4, 6, +]
- End → pop “*” → output [2, 4, +, 4, 6, +, *]
Postfix: 2 4 + 4 6 + *
That postfix string is now trivial to evaluate in one pass, and you’ve fully respected precedence and parentheses.
Evaluating postfix in one pass
Postfix evaluation is the clean part. I suggest you keep it simple and deterministic. Use a value stack, scan left to right, and apply operators as soon as you see them.
Rules:
- If the token is a number, push it.
- If the token is an operator, pop the top two values (right operand first, then left), apply the operator, push the result.
- At the end, if the stack has exactly one value, that’s your answer. Otherwise, the expression was invalid.
Using the postfix form from the previous example:
Postfix: 2 4 + 4 6 + *
Stack trace:
- Push 2 → [2]
- Push 4 → [2, 4]
- “+” → pop 4 and 2 → 2 + 4 = 6 → push 6 → [6]
- Push 4 → [6, 4]
- Push 6 → [6, 4, 6]
- “+” → pop 6 and 4 → 4 + 6 = 10 → push 10 → [6, 10]
- “*” → pop 10 and 6 → 6 * 10 = 60 → push 60 → [60]
Result: 60
This model gives you two practical benefits: it’s easy to reason about, and it is easy to test. You can craft a set of postfix expressions, expected results, and run them in a small loop without any parsing concerns.
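A sketch of that test loop, with a few postfix cases I made up:

```python
import operator

BINOPS = {'+': operator.add, '-': operator.sub,
          '*': operator.mul, '/': operator.truediv}

def eval_postfix_simple(postfix):
    """Minimal postfix evaluator for the four binary operators."""
    stack = []
    for tok in postfix.split():
        if tok in BINOPS:
            b, a = stack.pop(), stack.pop()
            stack.append(BINOPS[tok](a, b))
        else:
            stack.append(float(tok))
    assert len(stack) == 1, 'invalid expression'
    return stack[0]

# Table-driven cases: postfix string -> expected value.
for postfix, expected in {
    '2 4 + 4 6 + *': 60,
    '2 3 4 * +': 14,
    '8 2 / 3 -': 1,
}.items():
    assert eval_postfix_simple(postfix) == expected
```

No parsing concerns, no precedence logic: the table is the whole test harness.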
A complete Python evaluator you can run
Below is a full Python implementation that covers tokenization, infix-to-postfix conversion, and postfix evaluation. I include basic handling of unary minus (negative numbers) by converting it to a distinct operator token. It is runnable as-is and prints results for sample expressions.
from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class OpInfo:
    precedence: int
    right_assoc: bool


OPS = {
    '+': OpInfo(precedence=1, right_assoc=False),
    '-': OpInfo(precedence=1, right_assoc=False),
    '*': OpInfo(precedence=2, right_assoc=False),
    '/': OpInfo(precedence=2, right_assoc=False),
    '^': OpInfo(precedence=3, right_assoc=True),
    'u-': OpInfo(precedence=4, right_assoc=True),  # unary minus
}


def tokenize(expr: str) -> List[str]:
    tokens: List[str] = []
    i = 0
    while i < len(expr):
        ch = expr[i]
        if ch.isspace():
            i += 1
            continue
        if ch.isdigit() or ch == '.':
            j = i
            while j < len(expr) and (expr[j].isdigit() or expr[j] == '.'):
                j += 1
            tokens.append(expr[i:j])
            i = j
            continue
        if ch in '+-*/^()':
            tokens.append(ch)
            i += 1
            continue
        raise ValueError(f'Unexpected character: {ch}')
    return tokens


def to_postfix(tokens: List[str]) -> List[str]:
    output: List[str] = []
    stack: List[str] = []
    prev_token = None
    for token in tokens:
        if token.replace('.', '', 1).isdigit():
            output.append(token)
            prev_token = 'number'
            continue
        if token == '(':
            stack.append(token)
            prev_token = '('
            continue
        if token == ')':
            while stack and stack[-1] != '(':
                output.append(stack.pop())
            if not stack:
                raise ValueError('Mismatched parentheses')
            stack.pop()
            prev_token = ')'
            continue
        # Operator handling, including unary minus.
        op = token
        if op == '-' and (prev_token is None or prev_token in ('(', 'operator')):
            op = 'u-'
        if op not in OPS:
            raise ValueError(f'Unknown operator: {op}')
        while stack and stack[-1] in OPS:
            top = stack[-1]
            if (OPS[top].precedence > OPS[op].precedence) or (
                OPS[top].precedence == OPS[op].precedence and not OPS[op].right_assoc
            ):
                output.append(stack.pop())
            else:
                break
        stack.append(op)
        prev_token = 'operator'
    while stack:
        if stack[-1] in ('(', ')'):
            raise ValueError('Mismatched parentheses')
        output.append(stack.pop())
    return output


def eval_postfix(tokens: List[str]) -> float:
    stack: List[float] = []
    for token in tokens:
        if token.replace('.', '', 1).isdigit():
            stack.append(float(token))
            continue
        if token == 'u-':
            if not stack:
                raise ValueError('Unary minus missing operand')
            stack.append(-stack.pop())
            continue
        if token in ('+', '-', '*', '/', '^'):
            if len(stack) < 2:
                raise ValueError('Binary operator missing operands')
            b = stack.pop()
            a = stack.pop()
            if token == '+':
                stack.append(a + b)
            elif token == '-':
                stack.append(a - b)
            elif token == '*':
                stack.append(a * b)
            elif token == '/':
                stack.append(a / b)
            elif token == '^':
                stack.append(a ** b)
            continue
        raise ValueError(f'Unknown token: {token}')
    if len(stack) != 1:
        raise ValueError('Invalid expression')
    return stack[0]


def evaluate(expr: str) -> float:
    tokens = tokenize(expr)
    postfix = to_postfix(tokens)
    return eval_postfix(postfix)


if __name__ == '__main__':
    samples = [
        '(2+4) * (4+6)',
        '(3-1) * (8/(2+2) + 5)',
        '-3 + 4 * 2',
        '2 ^ 3 ^ 2',
    ]
    for s in samples:
        print(f'{s} = {evaluate(s)}')
If you run this, you’ll see correct precedence, correct power associativity, and handling of unary minus in front of a number or parenthesized expression. You can extend it with functions, variables, or modulo, but the structure should stay stable.
Mistakes, edge cases, and modern practice in 2026
I still see the same set of bugs in production evaluators, so I’ll call them out plainly.
Common mistakes I would flag in code review:
- Treating exponentiation as left-associative, which breaks expressions like 2 ^ 3 ^ 2.
- Failing to handle unary minus, so “-3 + 4” throws or miscomputes.
- Forgetting to clear the stack at the end, so a malformed expression returns a random value.
- Allowing mismatched parentheses to pass silently.
- Mixing integer and float division unintentionally, especially when porting between languages with different division semantics (Python 2’s / versus //, or C-style integer division carried into JavaScript).
Edge cases you should test:
- Deeply nested parentheses: (((1+2)*3)^(4-2))
- Leading negatives: -5 + 2
- Multiple spaces and unusual spacing: “ 3 + 4 ”
- Division by zero
- Long chains: 1 + 2 + 3 + 4 + 5
When to use stack-based evaluation:
- Small expression languages for configuration or rules engines
- Calculators and education tools
- Filtering expressions for data pipelines
- Scripting features inside apps
When not to use it:
- Full programming languages with variables, scopes, and complex syntax. Use a parser generator or a Pratt parser instead.
- Expressions that need user-defined functions with complex argument rules. You’ll want a true AST and a type system.
Here’s a pragmatic comparison I use when advising teams in 2026. A modern approach built on an AST plus tests gives you:
- Stronger correctness when language features grow
- Easier extension with functions and variables
- Better support for custom operator precedence and unary rules
- Higher initial effort, but cleaner long-term change control
I generally recommend the stack approach for compact expression features, and a full AST approach when the expression language becomes part of your product’s contract.
If you’re in a modern workflow, you should lean on AI-assisted test generation and property-based testing. I often ask a model to generate random expressions, then compare results against a trusted evaluator (like Python’s eval for a restricted subset). In CI, you can run 5k to 50k randomized tests in a few seconds, which gives you confidence across a huge surface area. Performance-wise, stack-based evaluation is linear in token count: typical calculator-sized inputs evaluate in well under a millisecond, and even expressions with a few hundred tokens stay in the sub-millisecond to low single-digit-millisecond range on server-grade hardware.
If you want a JavaScript flavor for browser or Node use, this is a small and runnable version of postfix evaluation that assumes you already have postfix tokens:
function evalPostfix(tokens) {
  const stack = [];
  for (const token of tokens) {
    if (/^\d+(\.\d+)?$/.test(token)) {
      stack.push(Number(token));
      continue;
    }
    if (token === 'u-') {
      if (stack.length < 1) throw new Error('Unary minus missing operand');
      stack.push(-stack.pop());
      continue;
    }
    if (['+', '-', '*', '/', '^'].includes(token)) {
      if (stack.length < 2) throw new Error('Binary operator missing operands');
      const b = stack.pop();
      const a = stack.pop();
      switch (token) {
        case '+':
          stack.push(a + b);
          break;
        case '-':
          stack.push(a - b);
          break;
        case '*':
          stack.push(a * b);
          break;
        case '/':
          stack.push(a / b);
          break;
        case '^':
          stack.push(a ** b);
          break;
      }
      continue;
    }
    throw new Error(`Unknown token: ${token}`);
  }
  if (stack.length !== 1) throw new Error('Invalid expression');
  return stack[0];
}
That version is intentionally focused on evaluation. In real projects, you still need a tokenizer and an infix-to-postfix converter, but the evaluation step stays almost identical.
Tokenization: the hidden half of reliability
Most articles treat tokenization as a quick pre-step, but in practice tokenization is half the correctness story. That’s because your evaluator only sees tokens, not characters. If tokenization is sloppy, the rest of the pipeline looks correct while still producing wrong results.
Two common tokenization pitfalls I see:
- Accepting “12.3.4” as a number instead of flagging it as invalid.
- Treating “-” as part of the number without checking whether it’s actually unary minus or subtraction.
I always recommend splitting tokenization into explicit states: number, operator, whitespace, and error. When you scan digits, you should allow a single decimal point and reject anything else. If you ever plan to support scientific notation (like 1e-3), that’s the moment to explicitly encode it instead of letting it slip in accidentally.
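Here’s a sketch of the number-scanning state, written the way I’d hand-roll it (scan_number is a hypothetical helper name, not from the implementation above):

```python
def scan_number(expr, i):
    """Scan a number starting at index i, allowing at most one decimal point.

    Returns (token, next_index); raises on malformed numbers like '12.3.4'.
    """
    j = i
    seen_dot = False
    while j < len(expr) and (expr[j].isdigit() or expr[j] == '.'):
        if expr[j] == '.':
            if seen_dot:  # a second dot is an error, not a new number
                raise ValueError(f'malformed number at position {j}')
            seen_dot = True
        j += 1
    token = expr[i:j]
    if token == '.':  # a lone dot is not a number either
        raise ValueError(f'malformed number at position {i}')
    return token, j
```

The explicit `seen_dot` state is exactly where you’d later hang scientific-notation support, instead of letting it slip in by accident.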
Practical tip: for small evaluators, I prefer a hand-written tokenizer over regex-heavy parsing. It’s easier to debug, you can add precise error messages, and it’s less likely to accept malformed input.
Unary operators and why they’re tricky
Unary operators look simple on paper but are tricky in practice because they depend on context. In “-3”, the minus is unary. In “5-3”, it’s binary. In “5*-3”, it is unary again. That means you cannot decide whether a minus is unary or binary just by looking at the character itself.
The rule I use is position-based: a minus sign is unary if it appears at the start of the expression or immediately after another operator or an opening parenthesis. That rule matches typical arithmetic expectations and avoids most corner cases.
If you plan to add unary plus or other unary operators (like factorial or logical negation), keep the rule consistent. I often define a small helper that says “a token can start a value,” and that becomes the rule for unary detection.
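A sketch of that helper (the names are mine):

```python
def can_start_value(prev_token):
    """True if the next token could begin a value, i.e. a minus here is unary."""
    # Start of expression, right after '(', or right after any operator.
    return (prev_token is None
            or prev_token == '('
            or prev_token in ('+', '-', '*', '/', '^'))

def classify_minus(prev_token):
    """Decide whether '-' is the unary 'u-' token or binary subtraction."""
    return 'u-' if can_start_value(prev_token) else '-'
```

Any future unary operator reuses `can_start_value`, so the detection rule stays in one place.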
Handling functions and identifiers without losing simplicity
A pure arithmetic evaluator handles numbers and operators only. Real product needs add functions and identifiers, like min(3, 4) or price * taxRate. You can still keep a stack-based approach, but you’ll need two extensions:
1) A dictionary of variables or a callback to resolve identifiers.
2) A function stack or a function token that carries arity.
A minimal pattern:
- Tokenize identifiers as a distinct token type.
- In infix-to-postfix conversion, treat an identifier followed by “(” as a function name.
- Push function tokens to the operator stack so they’re emitted after their arguments in postfix.
This is where the classic shunting-yard method shines, because it already has a way to handle commas and function calls. If your evaluator is staying in the “arithmetic only” world, don’t overbuild. But if you’re adding functions, plan to add tests for nested calls and mix-and-match with operators.
A worked example with unary minus and exponentiation
Let me walk a more complex example so you can see the algorithm hold together. The expression is:
-3 + 4 * 2 ^ 3
Given the precedence rules, exponentiation happens first, then multiplication, then addition. Unary minus applies to the 3 alone. The expected order is:
1) u- 3
2) 2 ^ 3
3) 4 * (2 ^ 3)
4) (-3) + result
The postfix form with unary minus is:
3 u- 4 2 3 ^ * +
If you simulate evaluation, you’ll get 29 (since -3 + 4 * 8 = -3 + 32 = 29). This is exactly the kind of example that exposes precedence and associativity bugs quickly. I keep one of these in my test suite because it touches unary operators, exponentiation, and multiplication in one expression.
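You can simulate that evaluation directly; this sketch hard-codes the postfix tokens from above:

```python
# Evaluate the postfix form 3 u- 4 2 3 ^ * + step by step.
stack = []
for tok in '3 u- 4 2 3 ^ * +'.split():
    if tok == 'u-':
        stack.append(-stack.pop())
    elif tok in ('+', '*', '^'):
        b, a = stack.pop(), stack.pop()
        stack.append(a + b if tok == '+' else a * b if tok == '*' else a ** b)
    else:
        stack.append(int(tok))

assert stack == [29]   # -3 + 4 * 2 ** 3
```

The stack passes through [3], [-3], [-3, 4], [-3, 4, 2], [-3, 4, 2, 3], [-3, 4, 8], [-3, 32], and finally [29].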
Error handling and the human factor
In production, you won’t just evaluate expressions; you’ll also explain errors to humans. That means errors must be precise and consistent. “Invalid expression” is fine for a unit test, but it’s useless for an end user or even a backend engineer debugging a 2 a.m. incident.
I aim for error types like:
- Unexpected character (with position)
- Mismatched parentheses
- Missing operand for operator
- Division by zero
If you’re building a shared expression service, attach a small error schema. For example, return {code: "MISSING_OPERAND", position: 12} rather than a raw stack trace. That helps downstream systems decide whether to retry, flag, or display a user-friendly message.
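A minimal sketch of that error schema in Python (EvalError and its fields are my own naming):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class EvalError(Exception):
    code: str                       # machine-readable, e.g. 'MISSING_OPERAND'
    position: Optional[int] = None  # token or character index, if known

    def to_payload(self):
        """A dict suitable for a JSON API response, not a raw stack trace."""
        return asdict(self)
```

Downstream systems can switch on `code` to decide whether to retry, flag, or render a friendly message, while `position` lets a UI underline the offending spot.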
Performance considerations you can actually act on
Most arithmetic evaluators are fast enough, but “fast enough” depends on usage patterns. Here’s how I think about it:
- For short expressions under 50 tokens, any stack-based implementation is effectively instant (sub-millisecond to low single-digit ms).
- For large expressions with hundreds or thousands of tokens, tokenization becomes a noticeable slice of runtime.
- For high-throughput systems (like pricing in a large marketplace), you may evaluate the same expression many times with different variable bindings.
If you’re in that third category, the best optimization is caching the postfix form or a compiled AST. That means you tokenize and parse once, then re-evaluate quickly with new values. You can also pre-validate expressions at write-time so runtime evaluation only deals with good data.
Avoid over-optimizing early. The algorithm is already linear in token count, and the constant factors are small. If you do need speed, measure first, then optimize for the real bottleneck, which is usually tokenization or variable lookups, not the stack operations themselves.
Security and safety when evaluating user input
If expressions come from user input, your main risk isn’t performance, it’s correctness and safety. A pure arithmetic evaluator is safe because it doesn’t execute code. But subtle issues still show up:
- Floating-point precision errors can produce unexpected results. If you’re in a billing context, consider decimal arithmetic instead of float.
- Extremely deep parentheses can trigger stack growth if you implement recursion. A stack-based approach is iterative, which helps, but tokenization still must handle size limits.
- Denial-of-service risks appear when users can submit very long expressions. Enforce token limits and input size caps.
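For the billing point specifically, here’s a small sketch of why I reach for decimal arithmetic, and how little the evaluator itself has to change:

```python
from decimal import Decimal

# Binary floats drift; Decimal keeps exact base-10 arithmetic.
assert 0.1 + 0.2 != 0.3
assert Decimal('0.1') + Decimal('0.2') == Decimal('0.3')

# In a stack evaluator, the only change is the numeric constructor.
def push_number(stack, token, exact=True):
    stack.append(Decimal(token) if exact else float(token))
```

The operator logic stays identical because Decimal supports +, -, *, and / directly; only the point where tokens become numbers needs to know about the money context.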
I also recommend a strict whitelist of operators. If you later add functions, make the whitelist explicit and versioned. Hidden function expansion is a security risk in expression engines embedded into larger systems.
A disciplined test strategy that scales
A simple evaluator needs a simple test suite, but a robust evaluator deserves layers of tests.
I use three layers:
1) Golden tests: a curated list of expressions and expected results.
2) Property-based tests: generate random expressions and compare against a trusted reference.
3) Error tests: malformed expressions that must fail for the right reason.
Here’s a small list of golden tests I like to start with:
- 1 + 2 * 3 = 7
- (1 + 2) * 3 = 9
- 2 ^ 3 ^ 2 = 512
- -3 + 4 = 1
- (-3) + 4 = 1
- 5 * -2 = -10
- 10 / 2 + 3 = 8
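Since Python’s eval works as a trusted reference over this restricted subset (after translating ^ to **), the golden list above can be sanity-checked with a sketch like this:

```python
# Golden cases from above, checked against Python's own evaluator as the
# trusted reference ('^' translated to '**' first).
GOLDEN = [
    ('1 + 2 * 3', 7),
    ('(1 + 2) * 3', 9),
    ('2 ^ 3 ^ 2', 512),
    ('-3 + 4', 1),
    ('(-3) + 4', 1),
    ('5 * -2', -10),
    ('10 / 2 + 3', 8),
]

for expr, expected in GOLDEN:
    reference = eval(expr.replace('^', '**'))  # trusted only for this subset
    assert reference == expected, (expr, reference, expected)
```

In a real suite you’d assert your own evaluator against the same table, so any divergence from the reference fails loudly with the offending expression attached.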
For property-based testing, I keep a constrained grammar so I don’t accidentally generate invalid expressions. The key is to generate a valid expression and compare results with a “trusted” evaluator. If you’re in Python, you can use eval on a sanitized subset. If you’re in another language, you can generate postfix first, then convert back to infix and test both routes.
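A sketch of a constrained generator (my own grammar choices: small integers, three operators, and no division to avoid divide-by-zero noise):

```python
import random

def random_expr(depth=0, rng=random):
    """Generate a random valid infix expression under a constrained grammar."""
    if depth >= 3 or rng.random() < 0.4:
        return str(rng.randint(0, 99))        # atom: a small integer
    left = random_expr(depth + 1, rng)
    right = random_expr(depth + 1, rng)
    op = rng.choice(['+', '-', '*'])          # no '/' in this grammar
    expr = f'{left} {op} {right}'
    return f'({expr})' if rng.random() < 0.5 else expr

# Every generated expression must be valid; in real tests you would also
# compare your evaluator's result against the trusted reference here.
for _ in range(100):
    assert isinstance(eval(random_expr()), int)
```

Because the grammar only produces valid expressions, every failure is a genuine evaluator bug rather than generator noise.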
The point is not just to avoid bugs; it’s to create confidence in your evaluator before it becomes a dependency for billing, user filters, or rule engines.
How I review expression code in production
When I read an evaluator in a code review, I focus on a few things first:
- Tokenization correctness: Does it reject invalid numbers and unknown characters?
- Precedence and associativity logic: Is exponentiation right-associative? Are unary operators encoded cleanly?
- Error handling: Are failures explicit? Are error messages helpful?
- Tests: Are there golden tests, and do they cover unary minus and exponentiation?
If those pass, I look for maintainability. I want a single map for operator metadata, not a web of if-else statements. I want a clean conversion pipeline and a small, testable evaluator function. That structure is what makes it safe to extend later.
A practical extension: variables and bindings
Here’s how I typically extend a simple evaluator for variables without changing the core stack logic:
- Tokenizer recognizes identifiers: [a-zA-Z][a-zA-Z0-9]*
- If a token is an identifier, you resolve it during evaluation using a dict.
- If the variable doesn’t exist, raise a descriptive error.
This allows expressions like:
price * quantity + tax
Your pipeline stays almost the same, and the evaluator becomes useful for configuration and rules. This is also the easiest way to keep expression evaluation separate from application logic: you pass in a context map and let the evaluator do the math.
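Here’s a sketch of that pattern; the function name and bindings are my own, and the postfix list stands in for the output of the converter:

```python
import operator

OPERATORS = {'+': operator.add, '-': operator.sub,
             '*': operator.mul, '/': operator.truediv}

def eval_postfix_with_vars(tokens, context):
    """Postfix evaluation where identifiers resolve through a context map."""
    stack = []
    for tok in tokens:
        if tok in OPERATORS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPERATORS[tok](a, b))
        elif tok.replace('.', '', 1).isdigit():
            stack.append(float(tok))
        elif tok in context:
            stack.append(float(context[tok]))
        else:
            raise ValueError(f'Unknown variable: {tok}')
    return stack[0]

# postfix of: price * quantity + tax
result = eval_postfix_with_vars(
    ['price', 'quantity', '*', 'tax', '+'],
    {'price': 4.0, 'quantity': 3, 'tax': 2.5},
)   # -> 14.5
```

The evaluator never knows where the numbers come from; the application just hands it a context map per call.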
Alternative approaches and why you might use them
Stack-based evaluation is excellent for simple arithmetic, but there are other ways to build evaluators. I recommend knowing them so you can choose the right tool.
1) Pratt parser (top-down operator precedence)
- Good for expressions that evolve over time
- Handles custom operators and functions cleanly
- More code upfront, but great for language-like features
2) Parser generators
- Useful for complex grammars
- Best when you need full language features
- Heavier tooling, slower iteration
3) AST-first manual parsing
- Hand-built recursive descent parser
- Good for mid-size expression languages
- Easier to extend than a pure stack model
I still advocate stack-based evaluation as a starting point. It’s clear, small, and practical. When the expression language grows beyond basic arithmetic and a handful of functions, that’s when I switch to an AST.
Observability and production stability
If expression evaluation is part of a critical system, you should treat it as a production component. That means some lightweight observability:
- Log parse failures with expression IDs (not necessarily the full expression if it may contain sensitive data).
- Track error rates for malformed expressions.
- Include a histogram of evaluation latency if expressions are evaluated at scale.
These signals help you spot patterns like “90% of errors come from a single customer” or “evaluation time spikes when expressions exceed 2,000 tokens.” Those aren’t theoretical; I’ve seen both.
Practical scenarios where the stack approach shines
I’ve deployed stack-based evaluators in several kinds of systems. Here are a few to show why the approach remains useful:
- Pricing rules: “basePrice * (1 - discount) + shipping.” Easy to implement and safe to validate.
- Feature flags: “userTier >= 3 && region == ‘US‘” (with boolean operators added).
- Data filters: “(score > 0.8 && age < 30) || isPremium.”
- Configuration evaluation: “timeout = base * (isPeak ? 2 : 1)” (with ternary support).
Each example starts with arithmetic but grows into a small expression language. The stack model is a good on-ramp, but be ready to evolve when complexity grows.
A deeper look at edge cases and how to defuse them
Edge cases are where evaluators break. I keep a mental list and I test them explicitly.
1) Empty input
- Should be rejected with a clear error.
2) Trailing operator
- “3 +” should not silently pass. Expect a missing operand error.
3) Consecutive operators
- “3 * / 4” should fail during parsing or conversion.
4) Parentheses without operators
- “(3)” is valid, “()” is not.
5) Unary minus with parentheses
- “-(3 + 4)” should be valid and compute to -7.
6) Right-associative exponent chains
- “2 ^ 3 ^ 2” should be 512, not 64.
The short version: if your evaluator passes these, it’s already more reliable than most ad hoc implementations I see in the wild.
A production-ready posture without overengineering
There’s a balance between “textbook correct” and “production ready.” Production-ready means:
- Predictable evaluation
- Clear error reporting
- Tests that catch regressions
- Limited, explicit feature set
You do not need a massive architecture for an arithmetic evaluator. You need crisp logic and reliable tests. If you reach for a bigger solution too early, you’ll lose the simplicity that makes the stack model so appealing.
A modern workflow for confidence
In 2026, I expect a modern workflow to look like this:
- Write a small suite of golden tests.
- Add property-based tests that generate random valid expressions.
- Integrate static analysis or linting to keep code clean.
- Add minimal observability if the evaluator is in production.
You can also use AI-assisted workflows here. I often ask a model to generate random expressions under a specific grammar and then compare results against a reference implementation. The key is not the tool; it’s the habit of testing beyond the “happy path.”
Summary: a reliable evaluator is a product asset
Arithmetic expression evaluation is not glamorous work, but it is foundational. If you get it wrong, you’ll see bugs in pricing, filters, analytics, and user-facing calculators. The stack-based approach gives you clarity, correctness, and speed, and it’s still a great fit for modern systems.
If you implement it with a clean tokenizer, explicit precedence rules, and solid tests, you’ll have an evaluator you can trust. And once you trust it, you can confidently build higher-level features like variables, functions, and rule-based logic without fear of hidden arithmetic bugs.
That’s the point: not just evaluating numbers, but building a small, dependable core that keeps your product stable when everything else changes.