Solving Equations with a Variable on Both Sides (Developer-Grade Guide)

You will eventually hit an equation that feels like a bug report: the unknown appears on both sides, the numbers look harmless, and yet your brain refuses to simplify it on the first pass.

I see this constantly in programming work. A pricing rule says: shipping depends on subtotal, but subtotal depends on shipping. A game formula says: damage scales with level, but level scaling depends on damage thresholds. A monitoring alert says: latency budget depends on throughput, but throughput depends on latency backpressure. In all of these, the math is really one idea: collect like terms, isolate the variable, and then check the result against reality.

If you can solve equations with the variable on both sides reliably, you get two superpowers: (1) you can reason about systems that feed back into themselves, and (2) you can write code that rearranges formulas safely instead of guessing.

I am going to show you the move-set I actually use, how to spot the two important edge cases (no solution and infinitely many solutions), and how to implement a small, runnable solver in Python and JavaScript that can handle real inputs like parentheses, negatives, and fractions.

## Why variables on both sides show up in real software
When I model a system, I often start with a relationship rather than a direct computation. Relationships produce equations, and equations love putting the unknown on both sides.

Here are three patterns that generate these equations:

- Fees and taxes that depend on totals: You might see something like total = subtotal + taxRate * total.
The variable total is on both sides.
- Discounts based on final price: For example, a platform takes a percentage of the customer-paid total, but you want to back-calculate what list price yields a target payout.
- Control loops and budgets: Rate limiting, queueing, and latency budgets often define constraints where the same quantity appears on both sides after substitutions.

In algebra class, these look like 5x - 3 = 2x + 9. In code, they look like:

- payout = price - 0.1 * price - fixedFee
- allowed = base + k * allowed

The solving technique is the same. The only difference is that in software you need to be extra careful about:

- Units (milliseconds vs seconds)
- Domains (prices cannot be negative)
- Rounding (currency cents)
- Validation (a formula rearrangement can be wrong but still type-check)

I’ll add one more that bites teams quietly: interpretation. In a lot of real systems, the same symbol gets reused for slightly different concepts (e.g., “price” could mean list price, pre-tax, post-tax, or post-fee). Equations with variables on both sides tend to show up exactly where naming is already ambiguous—because feedback loops are where semantics matter most.

A quick, concrete example from payments: suppose the user pays T, the platform takes a fee rate r on T, there’s a fixed processing fee f, and the seller receives P. A common relationship is:

- P = T - rT - f

If you want to compute T for a desired payout P, that’s the same “variable on both sides” issue (because T appears twice).
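Collecting T on one side turns that relationship into T - rT = P + f, so T = (P + f) / (1 - r). A minimal sketch of the back-calculation (function and variable names are mine, not from any payments API):

```python
from fractions import Fraction

def gross_for_payout(payout, fee_rate, fixed_fee):
    """Solve P = T - r*T - f for T: the total the buyer must pay."""
    r = Fraction(fee_rate)
    if r == 1:
        # At a 100% fee rate the coefficient of T is zero: no solution.
        raise ValueError("fee rate of 100% makes the payout unreachable")
    return (Fraction(payout) + Fraction(fixed_fee)) / (1 - r)

# Target a $100 payout with a 10% fee and a $0.30 fixed fee.
T = gross_for_payout(100, "1/10", Fraction(30, 100))

# Verify by substituting back into the original relationship.
assert T - Fraction("1/10") * T - Fraction(30, 100) == 100
```

Using exact rationals here means the substitution check is exact equality, not a tolerance comparison.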
In a spreadsheet it’s annoying; in code it’s dangerous because rounding and edge cases can create subtle mismatches at scale.

## The move-set: simplify, collect, isolate, verify
When the variable is on both sides, the goal is always the same:

- Put all variable terms on one side.
- Put all constants on the other.
- Divide (or multiply) to isolate the variable.
- Check the solution.

I do this with a tight set of moves, and I recommend you treat them like refactoring steps:

1) Simplify both sides
- Expand parentheses.
- Combine like terms.
- Reduce fractions where it is easy.

2) Collect variable terms on one side
- Add or subtract the same variable term on both sides.
- Pick the direction that keeps coefficients positive if you can (it reduces sign mistakes).

3) Collect constants on the other side
- Add or subtract constants to clear the side you want.

4) Isolate the variable
- Divide by the coefficient of the variable.
- If the coefficient is a fraction, dividing is usually cleaner than multiplying everything.

5) Verify
- Substitute the result into the original equation.
- In code, verify with a tolerance if you used floating point.

If you want a mental model: you are allowed to do anything to an equation as long as you do it to both sides equally. That is the invariant.

### The invariant (and why “do the same thing to both sides” works)
I like to be explicit about what operations are safe, because this is where people accidentally create bugs in algebra and in code.

Safe transformations are operations that preserve the set of solutions. Examples:

- Add the same expression to both sides: if A = B, then A + C = B + C.
- Subtract the same expression: A - C = B - C.
- Multiply by the same non-zero constant: if k != 0, then kA = kB.
- Divide by the same non-zero constant: if k != 0, then A/k = B/k.

The “non-zero” caveat matters.
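A quick way to make this concrete: treat each side as a function and check, at sample points, that a move preserves which x values satisfy the equation. A minimal sketch:

```python
# Two sides of 5x - 3 = 2x + 9, treated as plain functions.
left = lambda x: 5 * x - 3
right = lambda x: 2 * x + 9

# A safe move: subtract the same expression (2x) from both sides.
left2 = lambda x: left(x) - 2 * x
right2 = lambda x: right(x) - 2 * x

# At every integer sample point, both equations agree on whether x solves them.
for x in range(-50, 51):
    assert (left(x) == right(x)) == (left2(x) == right2(x))

# An unsafe move: multiplying both sides by 0 turns a non-solution into a "solution".
x = 0  # not a solution: left(0) = -3, right(0) = 9
assert left(x) != right(x) and left(x) * 0 == right(x) * 0
```

The last assertion is the whole caveat in two lines: multiplying by zero made both sides equal without x being a solution.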
Dividing both sides by an expression that could be zero is a classic way to accidentally delete solutions (or invent them). When I solve by hand, I usually avoid dividing by anything that depends on x unless I’m also doing a separate domain check. When I write a solver, I enforce the same rule: only divide by known non-zero constants (or branch on zero and handle both cases).

### A fast, mechanical checklist
When you’re under time pressure—debugging a production issue, reading a PR, or fixing a flaky billing test—this is the fastest way I know to avoid algebra mistakes:

- Expand parentheses early.
- Replace fractions with exact rationals (not decimals) if you can.
- Move all x terms to the left.
- Move all constants to the right.
- Reduce to kx = c.
- If k = 0, stop and classify (none vs infinite).
- Otherwise, x = c/k.
- Substitute back into the original equation (not the simplified one) to check.

## Worked examples that cover the tricky bits
I am going to start with straightforward cases, then move into the ones that typically cause mistakes in code reviews: distribution, negatives, and fractions.

### Example 1: 5x - 3 = 2x + 9
Collect variables on the left:

- Subtract 2x from both sides: 5x - 2x - 3 = 9
- Combine like terms: 3x - 3 = 9

Move constants to the right:

- Add 3 to both sides: 3x = 12

Isolate:

- Divide by 3: x = 4

Check:

- Left: 5(4) - 3 = 20 - 3 = 17
- Right: 2(4) + 9 = 8 + 9 = 17

### Example 2: 6x + 4 = 3x + 10
- Subtract 3x from both sides: 3x + 4 = 10
- Subtract 4 from both sides: 3x = 6
- Divide by 3: x = 2

### Example 3 (distribution): 4(x + 2) = 3x + 14
First simplify the left:

- Expand: 4x + 8 = 3x + 14

Then collect variables:

- Subtract 3x: x + 8 = 14

Then constants:

- Subtract 8: x = 6

### Example 4 (negatives): 2(3x - 1) = 4x + 8
Simplify:

- Expand: 6x - 2 = 4x + 8

Collect variables:

- Subtract 4x: 2x - 2 = 8

Collect constants:

- Add 2: 2x =
10

Isolate:

- Divide by 2: x = 5

### Example 5 (fractions that tempt rounding): (3/4)x + 2 = (1/2)x + 5
I like to avoid decimals here because they trigger avoidable floating point drift.

Collect variables:

- Subtract (1/2)x from both sides: (3/4)x - (1/2)x + 2 = 5
- Compute the coefficient: (3/4 - 1/2) = (3/4 - 2/4) = 1/4
- So: (1/4)x + 2 = 5

Collect constants:

- Subtract 2: (1/4)x = 3

Isolate:

- Multiply both sides by 4: x = 12

Check quickly:

- Left: (3/4)(12) + 2 = 9 + 2 = 11
- Right: (1/2)(12) + 5 = 6 + 5 = 11

If you are writing code that solves or validates equations, this is where rational arithmetic can save you from awkward tolerance bugs.

### Example 6 (variables on both sides with parentheses): 3(x - 4) + 2 = 2(x + 1)
This is the kind of thing that causes sign mistakes because you’re distributing a negative and moving terms.

Simplify both sides:

- Expand left: 3x - 12 + 2 = 3x - 10
- Expand right: 2x + 2

Now solve:

- 3x - 10 = 2x + 2
- Subtract 2x: x - 10 = 2
- Add 10: x = 12

Check:

- Left: 3(12 - 4) + 2 = 3(8) + 2 = 24 + 2 = 26
- Right: 2(12 + 1) = 26

### Example 7 (decimals—when you’re forced to use them): 0.2x + 1.5 = 0.05x + 3
When decimals show up, I often convert them to fractions before I do anything else.
But if you can’t (because the numbers came from measurements), then treat it as numeric algebra and keep track of precision.

Algebraically:

- Subtract 0.05x: 0.15x + 1.5 = 3
- Subtract 1.5: 0.15x = 1.5
- Divide: x = 10

Verification in code should use a tolerance, because 0.15 isn’t exactly representable in binary floating point:

- Check abs(0.2*10 + 1.5 - (0.05*10 + 3)) < 1e-9

### Example 8 (the “divide by x” trap): x = x^2
This example is intentionally not linear, but it illustrates a transformation pitfall that shows up in real debugging.

- x = x^2
- Move everything to one side: x^2 - x = 0
- Factor: x(x - 1) = 0
- Solutions: x = 0 or x = 1

If you had divided both sides by x at the start (to get 1 = x), you would have lost the x = 0 solution. That’s why I’m strict about dividing only by non-zero constants when I’m doing “variable on both sides” work.

## The two edge cases you must handle: no solution and infinite solutions
In programming, these are not academic corner cases.
They show up when your model has contradictory constraints or redundant equations.

### Case A: No solution (contradiction)
Example: 2x + 3 = 2x + 5

Collect variables:

- Subtract 2x from both sides: 3 = 5

That is false, so there is no value of x that makes the equation true.

How I explain this in engineering terms: the slopes match but the offsets differ, so the lines are parallel and never intersect.

### Case B: Infinite solutions (identity)
Example: 2x + 3 = 2x + 3

- Subtract 2x: 3 = 3

That is always true, so every value of x is a solution.

Engineering translation: you wrote the same constraint twice, or you simplified away the only informative part.

### How to detect these reliably
For linear equations in one variable, you can always reduce to:

- ax + b = cx + d
- Move everything to one side: (a - c)x + (b - d) = 0

Now you only need two zero checks, on (a - c) and on (b - d):

- If (a - c) = 0 and (b - d) = 0 -> infinite solutions
- If (a - c) = 0 and (b - d) != 0 -> no solution
- Otherwise -> one solution: x = (d - b) / (a - c)

This is exactly the branching logic you want in code.

### A geometric interpretation that prevents mistakes
I keep this picture in my head because it makes edge cases obvious.

- ax + b is a line with slope a and intercept b.
- cx + d is another line with slope c and intercept d.

When you solve ax + b = cx + d, you’re finding where those lines intersect.

- If a != c, the slopes differ and there is exactly one intersection (one solution).
- If a = c but b != d, the lines are parallel (no solution).
- If a = c and b = d, they’re the same line (infinitely many solutions).

This is also why “variable on both sides” isn’t scary—most of the time it’s just “two lines intersect or they don’t.”

## A programmer-friendly mental model: treat it like normalization
When I teach this to developers, I frame it like normalizing data:

- You have two expressions.
- You want to rewrite both into a standard form.

For
linear equations in one variable, the standard form I use is:

- Ax + B on the left
- Cx + D on the right

Then solving is mechanical.

The discipline that prevents most mistakes is to track two things explicitly:

- the coefficient of x
- the constant offset

Everything you do (expanding parentheses, adding terms, subtracting terms) should update those two numbers. If you do this, you will stop making the classic sign error where a minus sign quietly flips meaning when you move a term.

I also recommend one habit from modern development practice: when the algebra is part of business logic, write a tiny property test:

- Pick random x values.
- Evaluate both sides.
- Verify that the solved x makes them equal.

It is the math version of fuzzing.

### “Collecting terms” as a data structure
If you’ve ever implemented a compiler pass, a linter rule, or even a non-trivial formatting tool, you already understand the deeper idea: you can represent expressions as trees and then reduce them to a normal form.

For linear expressions in one variable, the normal form is incredibly small: two numbers (coefx, const). That’s why the solver implementations below work: they parse the expression and reduce it to those two values while preserving exactness.

This is also how I sanity-check my own algebra on paper: I ask “what is the coefficient of x?” and “what is the constant?” If I can’t answer quickly, I expand and rewrite until I can.

## Implementing a small linear solver in Python (runnable)
If you only need linear equations of the form ax + b = cx + d, you do not need a CAS. You need a parser that can reduce an expression into (coefx, const).

Below is a runnable Python example that handles:

- + and -
- multiplication by constants
- parentheses
- fractions like 3/4

It intentionally does not handle x*x, sin(x), or multiple variables.
For typical algebra worksheets and a lot of business formulas, linear is enough.

```python
from __future__ import annotations

from dataclasses import dataclass
from fractions import Fraction
from typing import List

Token = str


def tokenize(expr: str) -> List[Token]:
    expr = expr.replace(' ', '')
    tokens: List[Token] = []
    i = 0
    while i < len(expr):
        ch = expr[i]
        if ch in '+-*()':
            tokens.append(ch)
            i += 1
            continue

        if ch.isdigit():
            j = i
            while j < len(expr) and expr[j].isdigit():
                j += 1
            # fraction support like 3/4
            if j < len(expr) and expr[j] == '/':
                k = j + 1
                if k >= len(expr) or not expr[k].isdigit():
                    raise ValueError(f'Invalid fraction near: {expr[i:]}')
                while k < len(expr) and expr[k].isdigit():
                    k += 1
                tokens.append(expr[i:k])
                i = k
            else:
                tokens.append(expr[i:j])
                i = j
            continue

        if ch == 'x':
            tokens.append('x')
            i += 1
            continue

        raise ValueError(f'Unexpected character: {ch}')

    return tokens


@dataclass(frozen=True)
class Linear:
    coefx: Fraction
    const: Fraction

    def __add__(self, other: "Linear") -> "Linear":
        return Linear(self.coefx + other.coefx, self.const + other.const)

    def __sub__(self, other: "Linear") -> "Linear":
        return Linear(self.coefx - other.coefx, self.const - other.const)

    def scale(self, k: Fraction) -> "Linear":
        return Linear(self.coefx * k, self.const * k)


def parse_fraction(tok: str) -> Fraction:
    if '/' in tok:
        a, b = tok.split('/', 1)
        return Fraction(int(a), int(b))
    return Fraction(int(tok), 1)


class Parser:
    def __init__(self, tokens: List[Token]):
        self.tokens = tokens
        self.i = 0

    def peek(self) -> Token | None:
        return self.tokens[self.i] if self.i < len(self.tokens) else None

    def pop(self) -> Token:
        tok = self.peek()
        if tok is None:
            raise ValueError('Unexpected end of input')
        self.i += 1
        return tok

    # Grammar (linear-safe subset):
    #   expr   := term ((+ | -) term)*
    #   term   := factor ((*)? factor)*      (implicit multiplication allowed)
    #   factor := number | x | (expr) | + factor | - factor

    def parse_expr(self) -> Linear:
        value = self.parse_term()
        while self.peek() in ('+', '-'):
            op = self.pop()
            rhs = self.parse_term()
            value = value + rhs if op == '+' else value - rhs
        return value

    def parse_term(self) -> Linear:
        value = self.parse_factor()
        while True:
            tok = self.peek()
            if tok == '*':
                self.pop()
                rhs = self.parse_factor()
            elif tok == 'x' or tok == '(' or (tok is not None and tok[0].isdigit()):
                # implicit multiplication, e.g. 5x or 4(x+2)
                rhs = self.parse_factor()
            else:
                break
            # Only allow multiplication where at least one side is constant.
            if value.coefx != 0 and rhs.coefx != 0:
                raise ValueError('Non-linear term: x*x is not supported')
            if rhs.coefx == 0:
                value = value.scale(rhs.const)
            else:
                value = rhs.scale(value.const)
        return value

    def parse_factor(self) -> Linear:
        tok = self.peek()
        if tok in ('+', '-'):
            op = self.pop()
            val = self.parse_factor()
            return val if op == '+' else val.scale(Fraction(-1, 1))

        if tok == '(':
            self.pop()
            val = self.parse_expr()
            if self.pop() != ')':
                raise ValueError('Missing closing parenthesis')
            return val

        if tok == 'x':
            self.pop()
            return Linear(Fraction(1, 1), Fraction(0, 1))

        if tok is not None and tok[0].isdigit():
            self.pop()
            return Linear(Fraction(0, 1), parse_fraction(tok))

        raise ValueError(f'Unexpected token: {tok}')


@dataclass(frozen=True)
class SolveResult:
    kind: str  # 'one', 'none', 'infinite'
    x: Fraction | None = None


def parse_linear(expr: str) -> Linear:
    tokens = tokenize(expr)
    parser = Parser(tokens)
    out = parser.parse_expr()
    if parser.peek() is not None:
        raise ValueError(f'Unexpected trailing tokens: {parser.tokens[parser.i:]}')
    return out


def solve_equation(equation: str) -> SolveResult:
    if '=' not in equation:
        raise ValueError('Equation must contain =')
    left_s, right_s = equation.split('=', 1)

    left = parse_linear(left_s)
    right = parse_linear(right_s)

    # (left.coefx - right.coefx) * x + (left.const - right.const) = 0
    a = left.coefx - right.coefx
    b = left.const - right.const

    if a == 0 and b == 0:
        return SolveResult(kind='infinite', x=None)
    if a == 0 and b != 0:
        return SolveResult(kind='none', x=None)

    x = -b / a
    return SolveResult(kind='one', x=x)


def check_solution(equation: str, x: Fraction) -> bool:
    left_s, right_s = equation.split('=', 1)
    left = parse_linear(left_s)
    right = parse_linear(right_s)

    left_val = left.coefx * x + left.const
    right_val = right.coefx * x + right.const
    return left_val == right_val


if __name__ == '__main__':
    samples = [
        '5x-3=2x+9',
        '4(x+2)=3x+14',
        '2(3x-1)=4x+8',
        '3/4x+2=1/2x+5',
        '2x+3=2x+5',
        '2x+3=2x+3',
    ]

    for eq in samples:
        result = solve_equation(eq)
        if result.kind == 'one':
            ok = check_solution(eq, result.x)
            print(eq, '=> x =', result.x, 'check:', ok)
        else:
            print(eq, '=>', result.kind)
```

A couple of engineering notes:

- I used fractions.Fraction so checks are exact. That is often better than floats when the input is rational.
- For typical single equations, this runs in well under 1 ms in local scripts.
In a service handling batches, parsing dominates; you will usually see a few milliseconds per few thousand short equations.

### What this Python solver is (and is not)
I want to be explicit about the boundary, because this is where people over-trust “a solver” and ship the wrong behavior.

This solver supports a linear-safe subset: expressions built from numbers, x, parentheses, plus/minus, and multiplication where at least one side is constant. That includes typical forms like 2(x-3) and 3/4x. It does not support:

- Multiplying two x-containing expressions (non-linear).
- Division by expressions (like (x+1)/2).
- Multiple variables.

If you need those, you’re in symbolic math territory or you need numeric methods. But for “variable on both sides” problems as they show up in real business rules, linear is surprisingly common and surprisingly sufficient.

## Implementing the same idea in JavaScript (and validating it)
In JavaScript, you have a choice:

- Traditional: floats everywhere, then compare with a tolerance.
- Modern: use BigInt plus rational pairs, or a small fraction library.

In 2026, I still reach for a tiny rational representation when correctness matters (billing, compliance), and floats when inputs are inherently approximate (sensor data).

Here is a runnable Node.js script that uses a minimal fraction type backed by BigInt. It supports the same linear subset as the Python version.

```javascript
// node solve-linear.js

function absBigInt(x) {
  return x < 0n ? -x : x;
}

function gcd(a, b) {
  a = absBigInt(a);
  b = absBigInt(b);
  while (b !== 0n) {
    const t = a % b;
    a = b;
    b = t;
  }
  return a;
}

function frac(n, d = 1n) {
  if (d === 0n) throw new Error('division by zero in fraction');
  if (d < 0n) { n = -n; d = -d; }
  const g = gcd(n, d);
  return { n: n / g, d: d / g };
}

function fracAdd(a, b) {
  return frac(a.n * b.d + b.n * a.d, a.d * b.d);
}

function fracSub(a, b) {
  return frac(a.n * b.d - b.n * a.d, a.d * b.d);
}

function fracMul(a, b) {
  return frac(a.n * b.n, a.d * b.d);
}

function fracNeg(a) {
  return { n: -a.n, d: a.d };
}

function fracEq(a, b) {
  return a.n === b.n && a.d === b.d;
}

function fracToString(a) {
  if (a.d === 1n) return a.n.toString();
  return a.n.toString() + '/' + a.d.toString();
}

function fracFromToken(tok) {
  if (tok.includes('/')) {
    const [a, b] = tok.split('/');
    return frac(BigInt(a), BigInt(b));
  }
  return frac(BigInt(tok), 1n);
}

// Linear = coefX * x + constant
function lin(coefX, constant) {
  return { coefX, constant };
}

function linAdd(a, b) {
  return lin(fracAdd(a.coefX, b.coefX), fracAdd(a.constant, b.constant));
}

function linSub(a, b) {
  return lin(fracSub(a.coefX, b.coefX), fracSub(a.constant, b.constant));
}

function linScale(v, k) {
  return lin(fracMul(v.coefX, k), fracMul(v.constant, k));
}

// Tokenizer: numbers, fractions (like 3/4), x, operators, parentheses
function tokenize(expr) {
  const s = expr.replace(/\s+/g, '');
  const out = [];
  let i = 0;

  while (i < s.length) {
    const ch = s[i];

    if (ch === '+' || ch === '-' || ch === '*' || ch === '(' || ch === ')') {
      out.push(ch);
      i += 1;
      continue;
    }

    if (ch === 'x') {
      out.push('x');
      i += 1;
      continue;
    }

    if (ch >= '0' && ch <= '9') {
      let j = i;
      while (j < s.length && s[j] >= '0' && s[j] <= '9') j += 1;

      if (j < s.length && s[j] === '/') {
        let k = j + 1;
        if (k >= s.length || s[k] < '0' || s[k] > '9') {
          throw new Error('Invalid fraction near: ' + s.slice(i));
        }
        while (k < s.length && s[k] >= '0' && s[k] <= '9') k += 1;
        out.push(s.slice(i, k));
        i = k;
      } else {
        out.push(s.slice(i, j));
        i = j;
      }
      continue;
    }

    throw new Error('Unexpected character: ' + ch);
  }

  return out;
}

class Parser {
  constructor(tokens) {
    this.tokens = tokens;
    this.i = 0;
  }

  peek() {
    return this.i < this.tokens.length ? this.tokens[this.i] : null;
  }

  pop() {
    const tok = this.peek();
    if (tok === null) throw new Error('Unexpected end of input');
    this.i += 1;
    return tok;
  }

  // Grammar (linear-safe subset):
  //   expr   := term ((+ | -) term)*
  //   term   := factor ((*)? factor)*      (implicit multiplication allowed)
  //   factor := number | x | (expr) | + factor | - factor

  parseExpr() {
    let value = this.parseTerm();
    while (this.peek() === '+' || this.peek() === '-') {
      const op = this.pop();
      const rhs = this.parseTerm();
      value = (op === '+') ? linAdd(value, rhs) : linSub(value, rhs);
    }
    return value;
  }

  parseTerm() {
    let value = this.parseFactor();
    while (true) {
      const tok = this.peek();
      let rhs;
      if (tok === '*') {
        this.pop();
        rhs = this.parseFactor();
      } else if (tok === 'x' || tok === '(' || (tok !== null && tok[0] >= '0' && tok[0] <= '9')) {
        // Implicit multiplication, e.g. 5x or 4(x+2).
        rhs = this.parseFactor();
      } else {
        break;
      }

      const valueHasX = !fracEq(value.coefX, frac(0n, 1n));
      const rhsHasX = !fracEq(rhs.coefX, frac(0n, 1n));
      if (valueHasX && rhsHasX) {
        throw new Error('Non-linear term: x*x is not supported');
      }

      value = rhsHasX ? linScale(rhs, value.constant) : linScale(value, rhs.constant);
    }
    return value;
  }

  parseFactor() {
    const tok = this.peek();

    if (tok === '+' || tok === '-') {
      const op = this.pop();
      const val = this.parseFactor();
      return (op === '+') ? val : linScale(val, frac(-1n, 1n));
    }

    if (tok === '(') {
      this.pop();
      const val = this.parseExpr();
      if (this.pop() !== ')') throw new Error('Missing closing parenthesis');
      return val;
    }

    if (tok === 'x') {
      this.pop();
      return lin(frac(1n, 1n), frac(0n, 1n));
    }

    if (tok !== null && tok[0] >= '0' && tok[0] <= '9') {
      this.pop();
      return lin(frac(0n, 1n), fracFromToken(tok));
    }

    throw new Error('Unexpected token: ' + tok);
  }
}

function parseLinear(expr) {
  const tokens = tokenize(expr);
  const parser = new Parser(tokens);
  const out = parser.parseExpr();
  if (parser.peek() !== null) {
    throw new Error('Unexpected trailing tokens: ' + tokens.slice(parser.i).join(' '));
  }
  return out;
}

function solveEquation(equation) {
  const idx = equation.indexOf('=');
  if (idx === -1) throw new Error('Equation must contain =');
  const leftS = equation.slice(0, idx);
  const rightS = equation.slice(idx + 1);

  const left = parseLinear(leftS);
  const right = parseLinear(rightS);

  // (left.coefX - right.coefX) * x + (left.constant - right.constant) = 0
  const a = fracSub(left.coefX, right.coefX);
  const b = fracSub(left.constant, right.constant);

  const aIsZero = fracEq(a, frac(0n, 1n));
  const bIsZero = fracEq(b, frac(0n, 1n));

  if (aIsZero && bIsZero) return { kind: 'infinite', x: null };
  if (aIsZero && !bIsZero) return { kind: 'none', x: null };

  // ax + b = 0 => x = -b/a
  const x = fracMul(fracNeg(b), frac(a.d, a.n));
  return { kind: 'one', x };
}

function checkSolution(equation, x) {
  const idx = equation.indexOf('=');
  const left = parseLinear(equation.slice(0, idx));
  const right = parseLinear(equation.slice(idx + 1));

  const leftVal = fracAdd(fracMul(left.coefX, x), left.constant);
  const rightVal = fracAdd(fracMul(right.coefX, x), right.constant);
  return fracEq(leftVal, rightVal);
}

if (require.main === module) {
  const samples = [
    '5x-3=2x+9',
    '4(x+2)=3x+14',
    '2(3x-1)=4x+8',
    '3/4x+2=1/2x+5',
    '2x+3=2x+5',
    '2x+3=2x+3',
  ];

  for (const eq of samples) {
    const result = solveEquation(eq);
    if (result.kind === 'one') {
      const ok = checkSolution(eq, result.x);
      console.log(eq + ' => x = ' + fracToString(result.x) + ' check: ' + ok);
    } else {
      console.log(eq + ' => ' + result.kind);
    }
  }
}
```

### Traditional floats vs exact rationals (a quick comparison)
I decide between floats and rationals by asking one question: “Will the business treat tiny differences as bugs?”

- If you’re computing sensor thresholds, animation curves, or latency estimates: floats are fine and often faster.
- If you’re computing invoices, payouts, interest, or compliance-relevant numbers: exact rationals (or integer cents) are safer.

A simple comparison table:

|                 | Floats                         | Rationals              |
|-----------------|--------------------------------|------------------------|
| Representation  | approximate real numbers       | exact p/q              |
| Equality checks | need tolerances                | exact equality         |
| Failure mode    | drift, rounding surprises      | big integers can grow  |
| Best use        | measurements, ML-ish heuristics| finance, rules, tests  |

## Common pitfalls (and the habits I use to avoid them)
Most mistakes in “variable on both sides” equations are boring—and that’s why they’re so dangerous. They don’t look like errors; they look like plausible algebra. Here are the ones I see constantly.

### Pitfall 1: Sign errors when moving terms
Example: 7 - 2x = 3x + 1

I’ve watched people do “move -2x to the right” and write 7 = 3x + 2x + 1 (which is correct), and then immediately forget the + 1 and write 7 = 5x. The equation is now wrong but still “looks fine.”

Habit that prevents it: I only do one move per line, and I rewrite the full equation each time. It feels slow, but it’s faster than debugging a silent algebra bug later.

Solve it cleanly:

- 7 - 2x = 3x + 1
- Add 2x to both sides: 7 = 5x + 1
- Subtract 1: 6 = 5x
- x = 6/5

### Pitfall 2: Distributing a negative incorrectly
Example: -(x - 4) = 2x + 1

The left side becomes -x + 4, not -x - 4.

- -(x - 4) = 2x + 1
- -x + 4 = 2x + 1
- Add x: 4 = 3x + 1
- Subtract 1: 3 = 3x
- x = 1

Habit that prevents it: I treat “minus outside parentheses” as multiplying by -1 and distribute mechanically.

### Pitfall 3: Dividing by something that can be zero
I showed x = x^2 earlier because it’s the clearest demonstration. In linear equations, the analogous trap is less obvious but still real when you simplify and cancel factors.

Rule I follow: I do not cancel expressions unless I explicitly track when they could be zero and handle that case separately.

### Pitfall 4: Mixing units (the stealth algebra bug)
Equations with variables on both sides often arise from substitutions across layers: API inputs, database values, UI display, and business logic. This is where unit bugs sneak in.

If you have latency_ms = base_ms + k * latency_s, you’ve already lost; the equation is dimensionally inconsistent and any “solution” is meaningless.

Habit that prevents it: I do a quick dimensional check before solving.
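A tag on each quantity is enough to catch the mismatch before any algebra happens; a small illustrative sketch (the Quantity type and names are mine):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    unit: str  # e.g. "ms" or "s"

def require_same_unit(*quantities: Quantity) -> None:
    # Refuse to mix units in one equation; convert first, then solve.
    units = {q.unit for q in quantities}
    if len(units) > 1:
        raise ValueError(f"mixed units in one equation: {sorted(units)}")

base = Quantity(120.0, "ms")
latency = Quantity(0.2, "s")  # should have been converted to 200.0 ms

try:
    require_same_unit(base, latency)
except ValueError as err:
    print("refusing to solve:", err)
```

This is deliberately not a units library; the point is that even a string tag turns a silent dimensional bug into a loud error.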
If units don’t match, I stop and fix the model instead of solving.

### Pitfall 5: Rounding too early
If you solve in cents, solve in cents all the way until the end. If you round intermediate values, you can create contradictions like “no solution” due to rounding artifacts, or you can bias outputs systematically.

Habit that prevents it: I postpone rounding until the final output boundary (UI, invoice line, storage constraint), and I keep the solver exact when possible.

## Practical scenarios: when to use this, and when not to
Solving equations with the variable on both sides is powerful—but it’s not always the right tool. I separate problems into three categories.

### Use it when: you can isolate safely and the model is stable
Typical examples:

- Back-calculating totals from fees
- Computing break-even points
- Deriving configuration thresholds
- Solving for a parameter in a linear rule

These tend to reduce cleanly to ax + b = cx + d.

### Be cautious when: the equation is “mostly linear” but has constraints
Examples:

- “Price must be at least $0.50 and must round to the nearest cent”
- “Rate limit must be an integer”
- “Discount cannot exceed 80%”

You can still solve algebraically, but the raw solution might violate constraints. In those cases, I do this:

1) Solve the continuous equation.
2) Project the result into the valid domain (clamp, round, enforce integer).
3) Re-check the original relationship after projection.

### Don’t use it when: the equation is non-linear or discontinuous
Examples:

- x appears in a denominator: A = B/(x+1)
- Absolute values: |x - 3| = 2x + 1
- Piecewise fees: “$0.30 under $10, else 2.9%”

You can still solve these, but you need case analysis or numeric methods. For piecewise rules, I often do “solve per branch” and then select the branch that is consistent with the solution.

## Alternative approaches (same problem, different tools)
Sometimes it’s useful to know more than one method—especially when you’re debugging someone else’s code and you want an independent check.

### Approach 1: Reduce both sides to standard form
This is what the solver code does: parse each side into (coefx, const), then apply the edge-case logic and solve.

It’s fast, deterministic, and easy to unit test. For linear equations, this is my default.

### Approach 2: Move everything to one side and interpret it
If you rewrite ax + b = cx + d as (a-c)x + (b-d) = 0, you can reason about:

- The coefficient (a-c) as the “sensitivity”
- The constant (b-d) as the “offset”

In production systems, this is helpful because it tells you whether the solution will be stable: if (a-c) is very small (close to zero), small changes in inputs can produce huge swings in x. That’s not an algebra mistake; that’s a model stability issue.

### Approach 3: Numeric solving (when exact algebra is inconvenient)
If you have a monotonic function f(x) and you want f(x) = 0, you can use bisection or Newton’s method.

I only do this when the equation isn’t easily reducible to linear form or when it’s piecewise.
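For a monotonic f, bisection is only a few lines; here is a sketch with illustrative tolerances, using the A = B/(x+1) shape from above (A = 2, B = 10 has the root x = 4):

```python
def bisect_root(f, lo, hi, tol=1e-12, max_iter=200):
    """Find x with f(x) ~ 0, assuming f(lo) and f(hi) differ in sign."""
    flo, fhi = f(lo), f(hi)
    if flo == 0:
        return lo
    if fhi == 0:
        return hi
    if (flo > 0) == (fhi > 0):
        raise ValueError("f(lo) and f(hi) must bracket a root")
    for _ in range(max_iter):
        mid = (lo + hi) / 2
        fmid = f(mid)
        if abs(fmid) < tol or hi - lo < tol:
            return mid
        # Keep the half-interval whose endpoints still bracket the root.
        if (fmid > 0) == (flo > 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return (lo + hi) / 2

# Solve 2 = 10/(x+1) by finding the root of f(x) = 10/(x+1) - 2.
x = bisect_root(lambda x: 10 / (x + 1) - 2, 0.0, 100.0)
assert abs(x - 4) < 1e-6
```

Note the contrast with the linear solver: the answer is approximate, and the bracket [lo, hi] is an input you have to justify.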
For linear equations, numeric solving is overkill and introduces approximation error.

## Testing and validation strategies (the part people skip)
When this algebra ends up in software, the failure mode is rarely “the equation crashes.” The failure mode is “the equation returns a plausible wrong number.” That means testing is not optional.

### 1) Example-based unit tests
I always include tests for:

- A normal “one solution” case
- The “no solution” case
- The “infinite solutions” case
- A parentheses + negatives case
- A fractions case

The sample lists in both scripts are basically that.

### 2) Property tests (fuzzing the algebra)
This is the highest leverage test if you can do it. For linear equations, you can generate random coefficients and then generate an equation with a known solution.

Idea:

- Choose random a, b, c, d with a != c.
- Pick a random x0.
- Construct both sides so that ax + b and cx + d are equal at x0 (e.g., pick b freely and set d = b + (a - c) * x0).
- Feed the equation to your solver and confirm it returns x0.

This catches sign mistakes, parser bugs, and edge-case errors very quickly.

### 3) Cross-checking with evaluation
Even without fuzzing, I always do this in production code: after solving for x, plug it back into the original equation evaluator and confirm the residual is ~0.

- With rationals: exact equality
- With floats: abs(left - right) < epsilon

This is like asserting invariants after a refactor.

## Performance considerations (what actually matters)
For single equations, performance almost never matters. For batches (think: processing many pricing rules, validating lots of spreadsheet imports, or running simulations), a few things dominate.

### Parsing dominates compute
The arithmetic for linear solving is tiny. Tokenization and parsing are what you pay for.
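When the same expressions repeat across a batch, memoizing the parse step is the cheapest win. A sketch of the idea (the string format and function name are illustrative stand-ins, not the solver above):

```python
from fractions import Fraction
from functools import lru_cache

# Stand-in for a real parser: reduce a tiny "<a>x+<b>" string to (coef, const).
# In a real solver, this cached function would wrap tokenize-and-parse,
# which is the expensive part.
@lru_cache(maxsize=4096)
def parse_linear_cached(expr: str) -> tuple:
    a_str, b_str = expr.split('x+')
    return (Fraction(a_str), Fraction(b_str))

# Repeated expressions (common in rule engines) hit the cache instead of re-parsing.
for _ in range(3):
    assert parse_linear_cached('5x+-3') == (Fraction(5), Fraction(-3))
assert parse_linear_cached.cache_info().hits >= 2
```

Caching only pays off if inputs actually repeat, so measure hit rates before committing to it.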
If you need throughput:

- Avoid repeated parsing of identical expressions (cache parse trees).
- Keep the grammar small (linear-safe subset).
- Prefer iterative parsers over extremely general parsers if you’re doing this at scale.

In practice, you’ll see performance improvements in ranges like “a few times faster” from caching and simplifying input, not from micro-optimizing fraction multiplication.

### BigInt / Fraction growth
Exact rationals are great, but denominators can grow when you add many fractions. For typical short expressions (like business formulas), it’s fine. If you’re ingesting long expressions, consider:

- Normalizing inputs (e.g., use cents as integers rather than fractions).
- Reducing the number of fraction operations (simplify early).
- Imposing limits (max token length, max nesting).

In other words: treat the solver like an interpreter that needs guardrails.

## A production checklist I actually use
When “variable on both sides” algebra is about to ship into production logic, I run through this checklist:

- Is the equation truly linear in the variable?
- Are the units consistent on both sides?
- Are inputs constrained (non-negative, integer, bounded)?
- Do we handle no solution and infinite solutions explicitly?
- Do we have at least one exact rational test case?
- Do we verify by substitution after solving?
- Do we avoid dividing by expressions that might be zero?
- Do we postpone rounding until the end?

If I can’t answer “yes” to most of these, I treat the change like a risky refactor and add additional tests before I trust it.

## Closing thought: this is feedback-loop thinking
The reason equations with variables on both sides feel “harder” is not because the algebra is fundamentally different—it’s because they’re usually modeling feedback.
Feedback is where systems get interesting, but also where mistakes get amplified.

Once you internalize the move-set—simplify, collect, isolate, verify—you’re not just solving homework problems. You’re learning to reason about self-referential systems and to turn circular business rules into stable, testable code.

That’s a skill that pays back every time a formula shows up in a PR and everyone else silently hopes it’s correct.

