I still remember a late-night bug hunt where a pricing pipeline silently dropped thousands of records. The root cause was trivial: I appended arrays the wrong way and accidentally overwrote the original dataset in a shared reference. That moment stuck with me because appending arrays seems simple, yet small choices affect performance, readability, and data integrity. When you append one array to another in JavaScript, you are making an explicit decision about mutability, allocation, and intent. If you choose well, your code stays predictable under load. Choose poorly, and you get subtle bugs that only surface in production.
Here is what you will learn: how I decide between in-place vs immutable appends, how different approaches behave in modern runtimes, and where each method shines or fails. I will walk through real-world examples, highlight common mistakes, and show you how to balance clarity with performance. You should walk away with a concrete playbook you can apply immediately, whether you are merging API responses, building a UI state pipeline, or processing batch data.
The Decision That Actually Matters: Mutate or Return a New Array
The single most important decision is whether you want to modify the original array or create a new one. Everything else flows from that. In my experience, bugs usually appear when intent and implementation do not match.
Think of it like adding pages to a physical binder. If you add pages to the original binder (mutation), everyone who shares that binder sees the new pages. If you photocopy the binder and add pages to the copy (immutability), the original stays untouched. You need to choose which binder you want to edit.
Here is a quick framing you can keep in your head:
- Mutate in place when you own the array and you want maximum speed with minimal allocations.
- Return a new array when the original is shared, cached, or used by another part of the system.
Below, I will show the most common patterns, how they behave, and how I pick them in practice.
What Append Really Means in JavaScript
I always remind myself that arrays in JavaScript are objects, and variables store references to them. Appending is not just about values, it is about which reference you are updating. A lot of confusion disappears once you separate these ideas:
- A variable points to an array object.
- A new array creates a new object and a new reference.
- Mutation changes the existing object in place.
Here is a tiny example that explains many production bugs:
const shared = [1, 2, 3];
const a = shared;
const b = shared;
b.push(4);
console.log(a); // [1, 2, 3, 4]
If I need independent arrays, I must create a new array before I append. That single difference is why immutable patterns feel safer in large systems. When I need speed and I know I own the data, I mutate without guilt.
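To make that concrete, here is the safe counterpart to the shared-reference example above: copy first, then append, so the original reference is untouched.

```javascript
const shared = [1, 2, 3];

// Copy first: spread creates a brand-new array object with its own reference.
const independent = [...shared];
independent.push(4);

console.log(shared);      // [1, 2, 3]
console.log(independent); // [1, 2, 3, 4]
```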
The Workhorse: push() with the Spread Operator
If I need to append to an existing array and I am okay with mutation, I reach for push with spread. It is fast to read and easy to reason about.
const orders = [101, 102, 103];
const newOrders = [104, 105, 106];
// Mutates orders by appending items from newOrders
orders.push(...newOrders);
console.log(orders); // [101, 102, 103, 104, 105, 106]
Why I like it:
- Minimal overhead when you want to update the original array.
- Expresses intent clearly: append these values to that array.
When not to use it:
- If you need the original array unchanged.
- If newOrders is extremely large (spreading huge arrays can hit argument limits in some runtimes).
In my experience, for typical application sizes, this is the best default for in-place appends.
The Immutable Classic: concat()
When I want a new array and I need the original left untouched, concat is the most explicit option. It is an older method, but it reads cleanly, and its behavior is unambiguous.
const baseUsers = ['Ava', 'Noah', 'Maya'];
const newUsers = ['Kai', 'Zoe'];
const allUsers = baseUsers.concat(newUsers);
console.log(allUsers); // ['Ava', 'Noah', 'Maya', 'Kai', 'Zoe']
console.log(baseUsers); // ['Ava', 'Noah', 'Maya']
Why I like it:
- Safe for shared arrays and cached references.
- Works well when you chain arrays in data pipelines.
When not to use it:
- If you need to add items to an existing array without allocations.
If you are working with state management or shared data, concat is a solid choice.
The Modern Readable Choice: Spread Into a New Array
I often use spread to build a new array because it looks like literal assembly and reads top to bottom. It also composes well with inline values.
const coreFeatures = ['auth', 'billing', 'analytics'];
const premiumFeatures = ['automation', 'sso'];
const allFeatures = [...coreFeatures, ...premiumFeatures];
console.log(allFeatures); // ['auth', 'billing', 'analytics', 'automation', 'sso']
Why I like it:
- Strong readability for modern JavaScript.
- Easy to mix literals and multiple arrays.
When not to use it:
- If you are appending to an existing array and want to avoid allocation.
- If you are merging extremely large arrays where memory spikes could matter.
I tend to use this in UI code and data transformation layers where clarity matters most.
The Straightforward Manual Route: for Loop
When I need to apply custom logic as I append, like filtering, conversion, or de-duplication, I use a loop. It is the most flexible and avoids surprises.
const rawScores = [98, 102, null, 87, 110];
const validatedScores = [];
const bonusScores = [5, 7, 3];
// Append only valid scores to validatedScores
for (let i = 0; i < rawScores.length; i++) {
  const score = rawScores[i];
  if (typeof score === 'number' && score <= 100) {
    validatedScores.push(score);
  }
}
// Append bonus scores to the validated array
for (let i = 0; i < bonusScores.length; i++) {
  validatedScores.push(bonusScores[i]);
}
console.log(validatedScores); // [98, 87, 5, 7, 3]
Why I like it:
- Total control over what gets appended.
- Easier to debug when logic gets complex.
When not to use it:
- If you just need a simple append and no conditions.
Loops are not glamorous, but they are brutally effective when data is messy.
The Functional Alternative: reduce()
I use reduce when I am already in a functional pipeline and I want to keep it consistent. It is more verbose than it needs to be for a basic append, but sometimes it fits the style of the surrounding code.
const cartItems = ['keyboard', 'mouse'];
const suggestedItems = ['headset', 'webcam'];
const merged = suggestedItems.reduce((acc, item) => {
  acc.push(item);
  return acc;
}, [...cartItems]);
console.log(merged); // ['keyboard', 'mouse', 'headset', 'webcam']
Why I use it occasionally:
- Fits functional composition pipelines.
- Keeps logic inside a single expression when needed.
When not to use it:
- For simple appends; it is harder to read than other options.
If you are working with functional style or libraries that expect reducer patterns, this can be acceptable.
The In-Place Power Tool: splice()
splice can append elements to the end of an array by inserting them at array.length. It is mutation-heavy and slightly more awkward, but it is precise and sometimes handy when you are already using splice for other edits.
const timeline = ['draft', 'review', 'approved'];
const additions = ['scheduled', 'published'];
// Insert at the end using splice
timeline.splice(timeline.length, 0, ...additions);
console.log(timeline); // ['draft', 'review', 'approved', 'scheduled', 'published']
Why I keep it in my toolbox:
- Useful when you are doing multiple insertions or deletions in the same routine.
- Clear control over where inserts happen.
When not to use it:
- For normal appends, it is more complex than necessary.
I do not recommend it as a default, but it is good to recognize when it is already part of a larger edit operation.
Array.from(): Useful When Your Input Is Not Really an Array
Array.from is a great option when your array is actually an iterable or array-like object. When you already have arrays, it is a bit redundant, but it becomes practical when you mix in NodeLists, Sets, or typed arrays.
const ids = [201, 202, 203];
const moreIds = [204, 205];
const combined = Array.from([...ids, ...moreIds]);
console.log(combined); // [201, 202, 203, 204, 205]
A more realistic example:
const nodeList = document.querySelectorAll('.card');
const extraCards = ['promo', 'sponsored'];
const cards = Array.from(nodeList);
const combined = [...cards, ...extraCards];
console.log(combined);
Why I use it:
- Converts array-like data into true arrays before merging.
- Useful when you are combining DOM collections with normal arrays.
When not to use it:
- If you already have arrays on both sides and no conversion is required.
Traditional vs Modern Appending Patterns (Quick Comparison)
Here is a compact comparison I use when teaching teams how to choose. It is not about old vs new, it is about intent and clarity.
Traditional Pattern → What I Recommend
- for + push → push(...arr) for clarity
- concat → spread for readability, concat for explicit immutability
- for + push → loop for clarity
- Array.from + concat → Array.from + spread

I treat spread as the modern default, then adjust if performance or clarity suggests a different path.
Common Mistakes I See (and How to Avoid Them)
Even skilled developers trip over these. Here are the ones I see most often, along with fixes.
Mistake 1: Forgetting that push returns length
const numbers = [1, 2, 3];
const more = [4, 5];
const result = numbers.push(...more);
console.log(result); // 5 (length), not the array
If you want the array, keep using numbers after push.
Mistake 2: Using concat but expecting mutation
const base = ['alpha', 'beta'];
const extra = ['gamma'];
base.concat(extra);
console.log(base); // ['alpha', 'beta']
concat does not mutate. Assign the result or use push.
Mistake 3: Appending the array as a single element
const a1 = [1, 2, 3];
const a2 = [4, 5];
// Wrong for append
const wrong = a1.concat([a2]);
console.log(wrong); // [1, 2, 3, [4, 5]]
Make sure you pass the array directly or spread it.
Mistake 4: Spreading a huge array in older runtimes
Some runtimes still have limits on argument counts for function calls. If a2 is massive, push(...a2) could throw. In those cases, use a loop or chunking.
const target = [];
const huge = new Array(500000).fill(1);
// Safer for massive arrays
for (let i = 0; i < huge.length; i++) {
target.push(huge[i]);
}
Performance Notes You Can Actually Use
It is easy to overthink performance, so I focus on real usage patterns:
- push(...a2) is typically fast for small and medium arrays. On modern engines, you will usually see it finish within a few milliseconds for thousands of elements.
- concat and spread create new arrays, which means memory overhead. That can be a real concern when you are handling tens of thousands of objects or running on mobile.
- Loops are reliable for huge arrays. If you are appending massive datasets (hundreds of thousands of items), the loop approach is most stable and predictable.
In real systems, the biggest performance cost usually comes from what you do after appending: rendering, serialization, or I/O. Still, if you are on a hot path, mutation plus loops is usually the safest bet.
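If you want to sanity-check these claims on your own machine, a rough sketch like the following runs in Node or the browser. The timeIt helper and the input size are mine, not a rigorous benchmark; real measurements need warm-up runs and many iterations, and numbers vary widely by engine.

```javascript
// Rough timing sketch: compares three append strategies on the same input.
// Illustrative only; do not treat one-shot timings as benchmarks.
function timeIt(label, fn) {
  const start = performance.now();
  fn();
  return { label, ms: performance.now() - start };
}

const source = Array.from({ length: 100_000 }, (_, i) => i);

const results = [
  timeIt('loop + push (mutating)', () => {
    const target = [];
    for (let i = 0; i < source.length; i++) target.push(source[i]);
  }),
  timeIt('spread into new array', () => {
    const combined = [...source, ...source];
  }),
  timeIt('concat (new array)', () => {
    const combined = [].concat(source, source);
  }),
];

for (const { label, ms } of results) {
  console.log(`${label}: ${ms.toFixed(2)} ms`);
}
```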
Real-World Scenarios Where the Choice Matters
Let me ground this in practical cases I see in production codebases.
UI State Updates
If you are working with a state store that depends on immutability, you should return a new array.
const state = {
  notifications: ['welcome']
};
const incoming = ['new message', 'payment received'];
const nextState = {
  ...state,
  notifications: [...state.notifications, ...incoming]
};
Here, immutability prevents subtle UI bugs where components do not re-render because references did not change.
Server-Side Batch Processing
If you own the array and you are on a hot path, mutation is usually better.
const batch = [1001, 1002, 1003];
const incoming = [1004, 1005];
batch.push(...incoming);
This avoids a new allocation and keeps memory usage predictable during heavy workloads.
Data Cleaning Pipelines
When data is messy, I prefer loops, because I can validate each item as I append.
const raw = ['15', '17', 'invalid', '21'];
const clean = [];
for (let i = 0; i < raw.length; i++) {
  const n = Number(raw[i]);
  if (Number.isFinite(n)) {
    clean.push(n);
  }
}
If you are cleaning user data or third-party responses, explicit loops are more readable and safer.
How I Choose: A Simple Decision Checklist
When I am unsure, I run through these questions in order:
1) Do I want to mutate the existing array? If yes, I reach for push(...arr).
2) Am I working with shared state or cached data? If yes, I use concat or spread into a new array.
3) Do I need custom logic while appending? If yes, I use a loop.
4) Is my input not a true array? If yes, I normalize with Array.from.
This keeps decisions consistent across a team, which matters more than any single method's micro-performance.
Edge Cases That Deserve a Mention
Appending Typed Arrays
Typed arrays like Uint8Array do not behave exactly like normal arrays. You will need to convert them or create a new typed array explicitly.
const a = new Uint8Array([1, 2]);
const b = new Uint8Array([3, 4]);
const combined = new Uint8Array(a.length + b.length);
combined.set(a, 0);
combined.set(b, a.length);
console.log(combined); // Uint8Array(4) [1, 2, 3, 4]
Appending Nested Arrays
If you are working with nested arrays, make sure you are appending at the right level. A single mistaken concat can turn a flat list into a nested structure.
const a1 = [[1, 2], [3, 4]];
const a2 = [[5, 6]];
const merged = a1.concat(a2); // still nested arrays, as expected
If you want to flatten one level, combine with flat():
const flat = a1.concat(a2).flat();
console.log(flat); // [1, 2, 3, 4, 5, 6]
Appending in Async Flows
If arrays are shared across async operations, mutation can be risky. Prefer immutable patterns to avoid race conditions and unexpected ordering.
const results = ['start'];
async function addResults(fetchFn) {
  const data = await fetchFn();
  return [...results, ...data];
}
When in doubt, create new arrays in async flows to avoid shared-state surprises.
A Practical Recommendation for 2026 Projects
Modern JavaScript tooling makes immutability easier to manage, especially with state libraries and reactive frameworks. If you are in a UI-heavy codebase or collaborative team, I recommend defaulting to immutable patterns for clarity and predictability.
However, if you are in a data pipeline, backend service, or high-throughput batch job, I recommend mutation with push(...arr) or loops for the hot paths. The performance benefits are real when you scale up.
I also encourage teams to document their approach in a short style guide. A one-page array operations rulebook can eliminate hours of debate and bugs. For example:
- Default to spread or concat in UI and shared data.
- Use push(...arr) in tight loops or large data ingestion.
- Use loops for validation or transformation during appends.
Mental Models That Prevent Bugs
I keep three mental models in my head that prevent 90 percent of array-append mistakes.
First, I treat append as a choice between mutating and copying. I say it out loud when I code: mutate or copy. It sounds silly, but it stops me from mixing up push and concat.
Second, I consider arrays as containers with identity. If I hand that container to another part of the system, I assume it can be shared and I stop mutating it. This is the same reason I avoid mutating objects that are passed into functions I do not control.
Third, I think about downstream consumers. A UI render, a cache, or a memoized selector can all break if I mutate the array in place. If I know someone else cares about the reference, I return a new one.
These models are cheap to remember and they are more reliable than memorizing method names.
Append vs Merge vs Flatten: Similar Words, Different Outcomes
Teams often use the words append, merge, and flatten interchangeably, and that is how accidental bugs creep in. Here is how I distinguish them:
- Append means add items from one array to the end of another.
- Merge often means combine arrays, but it can also imply de-duplication or sorting.
- Flatten means turn nested arrays into a single level array.
When I review code, I check that the function names and comments match the actual operation. If I see a function called merge that simply concatenates, I will either rename it or add clarification. These naming decisions can save hours later.
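A tiny side-by-side makes the distinction concrete. The de-duplicating flavor of merge shown here is one convention, not the only one:

```javascript
const flatList = [1, 2];
const nested = [[3], [4]];

// Append: add nested's items (which happen to be arrays) onto a copy of flatList.
const appended = [...flatList, ...nested]; // [1, 2, [3], [4]]

// Merge (here: combine and de-duplicate primitive values).
const merged = [...new Set([...flatList, 2, 3, 4])]; // [1, 2, 3, 4]

// Flatten: collapse one level of nesting after the append.
const flattened = appended.flat(); // [1, 2, 3, 4]

console.log(appended, merged, flattened);
```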
Handling Duplicates While Appending
Appending often creates duplicates. That might be fine, but in many systems it is not. I like to make this choice explicit.
If I want to preserve order but remove duplicates, I use a Set after a concat or spread. This is clean and readable, and it works well for primitive values.
const a1 = ['alpha', 'beta', 'gamma'];
const a2 = ['beta', 'delta'];
const merged = [...a1, ...a2];
const unique = [...new Set(merged)];
console.log(unique); // ['alpha', 'beta', 'gamma', 'delta']
For objects, I usually need a key. Here is a pattern I use when appending records that have an id:
const existing = [{ id: 1, name: 'Ada' }, { id: 2, name: 'Linus' }];
const incoming = [{ id: 2, name: 'Linus' }, { id: 3, name: 'Grace' }];
const map = new Map();
for (const item of [...existing, ...incoming]) {
  map.set(item.id, item);
}
const deduped = [...map.values()];
console.log(deduped); // [{ id: 1, ... }, { id: 2, ... }, { id: 3, ... }]
That pattern is fast enough for most cases and gives me clear control over which version wins in a collision.
Appending Many Arrays at Once
Sometimes I need to append multiple arrays in a single step, like merging paginated API responses. There are a few common approaches, each with tradeoffs.
Using spread with multiple arrays is clean but can be memory-heavy if the lists are large:
const a = [1, 2];
const b = [3, 4];
const c = [5, 6];
const combined = [...a, ...b, ...c];
If I already have an array of arrays, I use concat with apply or reduce. I prefer reduce because it reads better to me:
const pages = [[1, 2], [3, 4], [5, 6]];
const combined = pages.reduce((acc, page) => acc.concat(page), []);
For mutation, I will append pages in a loop to keep memory steady:
const target = [];
const pages = [[1, 2], [3, 4], [5, 6]];
for (let i = 0; i < pages.length; i++) {
  target.push(...pages[i]);
}
This is slightly more verbose, but I have seen it reduce memory spikes on large datasets.
Chunking Large Appends Safely
If I have to append a massive list and I suspect argument limits or memory pressure, I chunk it. This is safer for environments where pushing hundreds of thousands of items at once can fail.
function appendInChunks(target, source, chunkSize = 10000) {
  for (let i = 0; i < source.length; i += chunkSize) {
    const chunk = source.slice(i, i + chunkSize);
    target.push(...chunk);
  }
  return target;
}
const target = [];
const huge = new Array(200000).fill(1);
appendInChunks(target, huge);
Chunking is not needed for small arrays, but it is my default for anything that might be unbounded, like user uploads or third-party data exports.
Working with Array-Like Inputs
Array-like inputs are common in the browser and in Node. If I forget to normalize them, I end up with confusing bugs.
A common example is the arguments object in older functions:
function addAll() {
  const args = Array.from(arguments);
  return args.reduce((sum, n) => sum + n, 0);
}
Another example is a Set or Map where I want order or I want to append more items:
const ids = new Set([1, 2, 3]);
const more = [3, 4, 5];
const combined = [...ids, ...more];
The key point is that append logic is easier when you normalize inputs first. I always convert once at the edge of my function instead of sprinkling Array.from everywhere inside.
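As a sketch of that "convert once at the edge" idea, here is a hypothetical appendAll helper (the name and signature are mine) that normalizes every source before touching the target:

```javascript
// Normalize at the boundary: callers may pass arrays, Sets, NodeLists,
// or other iterables, and the append logic never has to care which.
function appendAll(target, ...sources) {
  for (const source of sources) {
    // Array.from handles both iterables and array-like objects in one place.
    for (const item of Array.from(source)) {
      target.push(item);
    }
  }
  return target;
}

const result = appendAll([1], new Set([2, 3]), [4]);
console.log(result); // [1, 2, 3, 4]
```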
Choosing Immutable Patterns in UI State
If I am working in a UI framework, immutability is not just preference, it is a functional requirement. Rendering systems often compare references, not deep values, so I want to return a new array even if it is slightly more costly.
Here is a pattern I use in reducers where I need to append payload items to existing state:
function notificationsReducer(state, action) {
  if (action.type === 'ADD_NOTIFICATIONS') {
    return {
      ...state,
      notifications: [...state.notifications, ...action.payload]
    };
  }
  return state;
}
If I need to ensure I am not mutating by accident, I sometimes use Object.freeze in development or write tests that assert reference equality is not reused. That is a simple guardrail for state-heavy systems.
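Here is what that guardrail can look like, reusing the reducer from above. Freezing is a development-time assumption on my part, not something I would ship on a production hot path, and note that mutating a frozen object only throws in strict mode:

```javascript
'use strict';

function notificationsReducer(state, action) {
  if (action.type === 'ADD_NOTIFICATIONS') {
    return { ...state, notifications: [...state.notifications, ...action.payload] };
  }
  return state;
}

// Freeze the state in development so any accidental in-place mutation throws.
const state = Object.freeze({ notifications: Object.freeze(['welcome']) });
const next = notificationsReducer(state, {
  type: 'ADD_NOTIFICATIONS',
  payload: ['new message'],
});

// The reducer must hand back fresh references, not the frozen originals.
console.log(next.notifications !== state.notifications); // true
console.log(state.notifications.length); // 1 (the original is untouched)
```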
Choosing Mutation for Throughput
When I am working on data pipelines or server side aggregation, I do not want to pay the cost of allocating new arrays for each batch. This is where mutation shines.
Here is a real example from a log aggregation script I wrote. I have thousands of log lines per second, so I reuse the same buffer.
const buffer = [];
function addBatch(lines) {
  buffer.push(...lines);
  if (buffer.length > 5000) {
    flush(buffer);
    buffer.length = 0; // reuse the same array
  }
}
The line that resets buffer.length is a performance trick. It keeps the same array reference but clears it. That is a deliberate decision that should be documented because it is easy to misunderstand.
Appending with Filtering and Mapping in One Pass
Sometimes I want to transform data as I append to avoid extra loops. I keep this pattern simple so it stays readable.
const raw = ['12', 'n/a', '19', ''];
const clean = [];
for (let i = 0; i < raw.length; i++) {
  const value = Number(raw[i]);
  if (Number.isFinite(value)) {
    clean.push(value * 2);
  }
}
Yes, I could use filter and map and then spread, but that creates intermediate arrays. In a hot path, a single loop is easier on memory and often easier to debug.
Why Appending Can Change Big-O Behavior
Appending itself is usually O(n) because you have to move or iterate over the source array. But what happens after that can change the overall complexity of a pipeline.
If I append inside a loop that also grows the array and I am not careful, I can accidentally double the work. This classic pitfall shows up in naive algorithms:
const items = [1, 2, 3];
for (let i = 0; i < items.length; i++) {
  items.push(i); // this keeps extending the loop
}
This can lead to unexpected behavior or even infinite loops if the array keeps growing. When I append inside a loop, I snapshot the length first:
const items = [1, 2, 3];
const initialLength = items.length;
for (let i = 0; i < initialLength; i++) {
  items.push(i);
}
That is not strictly about appending at the end of another array, but it is a real edge case I have seen in production.
Practical Debugging Techniques
When appending logic looks correct but the output is wrong, I use a few quick checks.
- I log array references by comparing object identity. If a supposedly new array is still strictly equal to the old one, I know I mutated.
- I check lengths before and after the append to verify expectations.
- I isolate appends by splitting steps into variables rather than chaining methods, so I can inspect intermediate results.
Here is a tiny diagnostic helper I keep around:
function assertNewArray(prev, next) {
  if (prev === next) {
    throw new Error('Expected new array but received the same reference');
  }
}
This sort of guard is surprisingly effective during refactors.
Testing Append Logic with Small and Large Inputs
I like to test appends with both tiny and large inputs because the failure modes are different.
For example, I write small unit tests to confirm that I do not mutate arrays when I should not. Then I add a simple stress test to ensure huge inputs do not crash.
function appendImmutable(a, b) {
  return [...a, ...b];
}
const a = [1, 2];
const b = [3, 4];
const result = appendImmutable(a, b);
console.log(result); // [1, 2, 3, 4]
console.log(a); // [1, 2]
For large inputs, I might run a quick script locally. I do not obsess over exact timings, but I do ensure it completes and does not throw.
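A minimal version of that stress check might look like this. The one-million threshold is arbitrary, and appendImmutable is redefined here so the snippet stands alone:

```javascript
function appendImmutable(a, b) {
  return [...a, ...b];
}

const big = new Array(1_000_000).fill(0);
let ok = false;
try {
  // Spread inside an array literal iterates the source, so it does not
  // hit function argument-count limits the way push(...big) might.
  ok = appendImmutable(big, big).length === 2_000_000;
} catch (err) {
  ok = false; // e.g. an allocation failure on a memory-constrained runtime
}
console.log(ok);
```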
A Decision Matrix I Share with Teams
When I teach teams, I use a simple matrix to keep choices consistent. It is a quick reminder that intent and clarity trump micro-optimizations.
Scenario → Best Choice
- You own the array and want an in-place append → push(...arr)
- Huge or unbounded inputs → for loop or chunked push
- Shared state, caches, or UI rendering → concat or spread
- Array-like or iterable input → Array.from + spread
- Custom logic while appending → for loop
I also encourage teams to leave short comments when they choose mutation in shared code. It eliminates confusion during reviews.
A Note on Memory and Garbage Collection
Appending can create extra garbage if you keep building new arrays in hot paths. This matters in long-running services and in mobile web apps.
If I am building a new array every tick or every request, I think about:
- Reusing buffers when appropriate.
- Clearing arrays by setting length to 0 instead of creating a new one.
- Avoiding intermediate arrays by combining filtering and appending in a single loop.
These micro-decisions add up when your code runs millions of times.
When Not to Append at All
Sometimes the best append is no append. If my data is naturally streaming, I might use generators or iterators instead of building giant arrays.
If I only need to process each item once, I might handle them as they arrive rather than appending them into a big collection. This is not always practical, but it is worth considering in memory-sensitive systems.
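A generator-based sketch of that idea: items flow through one at a time, and no combined array is ever materialized.

```javascript
// Stream items from several sources without building a merged array first.
function* streamItems(...sources) {
  for (const source of sources) {
    yield* source; // works for arrays, Sets, and other iterables
  }
}

let total = 0;
for (const n of streamItems([1, 2], [3, 4], new Set([5]))) {
  total += n; // each item is handled once, then discarded
}
console.log(total); // 15
```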
Appendix: A Tiny Utility I Actually Use
I keep a small helper function for cases where I want to control mutation but still keep call sites clean. It is intentionally tiny so it does not hide behavior.
function appendInto(target, source) {
  target.push(...source);
  return target;
}
I like this because it forces me to pass the target explicitly, so I stay aware of which array is being mutated. It also helps in code reviews: a line that calls appendInto signals mutation clearly.
Final Takeaway
Appending arrays is a tiny operation with outsize consequences. The difference between push and concat is not just syntax, it is the difference between shared state and safe copies. My rule is simple: default to immutability when sharing or rendering, and default to mutation when performance matters and you own the data.
When I follow that rule, I get fewer late-night bug hunts, more predictable systems, and code that reads clearly to the next person who inherits it. That is what I want every time I append one array to another.