PowerShell's pipeline operator lets you connect multiple commands, piping the output of one command into the next. In this guide, we'll cover everything you need to know about using pipelines effectively in PowerShell.
What is a Pipeline in PowerShell?
A pipeline in PowerShell is a set of commands separated by the pipeline operator |. It allows you to take the output from one command and send it as input to another command.
For example:
Get-Process | Sort-Object CPU -Descending
Here the Get-Process cmdlet gets a list of running processes on the system. Its output is then piped to the Sort-Object cmdlet to sort the processes by highest CPU usage.
Pipelines allow you to combine multiple simple commands to perform complex tasks.
Why Use Pipelines in PowerShell?
Pipelines provide several key benefits:
- Simplifies Complex Tasks: You can break down a complex task into smaller reusable steps. Each command performs one part, and the pipeline stitches them together.
- Avoids Intermediate Files: Pipelines pass data in memory between commands, so you don't need temporary files to store intermediate results.
- Increased Readability: Pipelines visually convey how data flows from one operation to the next, so your code is self-documenting to a degree.
- Flexible and Modular: You can reuse pipeline components. Just plug the commands together in different ways, and add or remove commands easily.
Overall, pipelines help you write cleaner code that's easier to understand and maintain in PowerShell.
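To make the contrast concrete, here is the same task written first with intermediate variables and then as a single pipeline (a sketch; the top-5-by-CPU task is just an illustration):

```powershell
# Without a pipeline: each intermediate result is stored in a variable
$processes = Get-Process
$sorted    = $processes | Sort-Object CPU -Descending
$top5      = $sorted | Select-Object -First 5

# With a pipeline: the same logic as one readable, left-to-right statement
$top5 = Get-Process | Sort-Object CPU -Descending | Select-Object -First 5
```

Both versions produce the same five objects; the pipeline simply avoids naming and storing the intermediate collections.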
The Pipeline Operator
The pipeline operator used to connect commands is the vertical bar |.
Here is the basic syntax:
Command1 | Command2 | Command3
- Command1 executes first and passes its output to the pipeline
- Command2 receives the piped input from Command1 and runs
- Command2 passes its output to Command3, and so on
So data flows from left to right through the pipeline, modified by each command along the way.
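For instance, this three-stage pipeline shows that flow, with each command transforming the stream before passing it on (the 100 MB threshold is arbitrary):

```powershell
Get-Process |                                  # stage 1: emit process objects
    Where-Object { $_.WorkingSet -gt 100MB } | # stage 2: keep only large processes
    Select-Object Name, Id                     # stage 3: trim each object to two properties
```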
How Pipelines Work in PowerShell
To understand pipelines, you need to know how PowerShell handles pipeline input/output.
In PowerShell, cmdlets process objects – structured information with built-in properties and methods.
For example, the Get-Process cmdlet outputs process objects containing details like process name, ID, memory usage etc.
When you pipe the process objects to another command, it receives them as input. It can then access and manipulate the properties on those objects.
After running, the cmdlet outputs new objects which flow further through the pipeline.
So pipelines work through streams of .NET objects in PowerShell.
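You can see these objects for yourself by piping a command's output to Get-Member, which reports the underlying .NET type along with its properties and methods:

```powershell
# Inspect the type of object Get-Process emits
Get-Process | Get-Member

# The first line of output names the type:
#   TypeName: System.Diagnostics.Process
# Everything a downstream command can read (Name, Id, CPU, ...) is listed below it
```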
PowerShell Objects in Pipelines
As data flows through the pipeline, its structure transforms based on the operation performed at each stage:
Get-ChildItem *.txt | Measure-Object
Here's what happens step-by-step:
- Get-ChildItem lists the .txt files in the current folder and outputs FileInfo objects containing file names, sizes, etc.
- The FileInfo objects flow into the pipeline
- Measure-Object receives the objects and counts them, outputting a GenericMeasureInfo object (add -Property Length -Sum to total the file sizes as well)
So the output object type changes as it flows down the pipeline. But the underlying data moves forward.
Understanding this object flow is key to constructing effective pipelines.
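One way to verify the transformation is to check the type at each stage with GetType(); this sketch assumes the current folder contains at least one .txt file:

```powershell
# Stage 1 output: FileInfo objects describing individual files
(Get-ChildItem *.txt | Select-Object -First 1).GetType().Name

# Stage 2 output: a single GenericMeasureInfo summary object
(Get-ChildItem *.txt | Measure-Object).GetType().Name
```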
Constructing Pipelines in PowerShell
The key things to remember when building a pipeline are:
- Pipeline Input/Output Objects
- Parameter Usage
- Error Handling
Let's go through them one-by-one with examples.
Match Pipeline Object Types
For a clean pipeline, verify that the output of one command can be used as input for the next one.
Their object types should match. Often the cmdlet documentation lists input object types accepted.
For example, Sort-Object accepts input from many other cmdlets:
Get-Process | Sort-Object CPU
Get-Service | Sort-Object Status
Since Sort-Object sorts by whichever property you name, it handles process objects (by CPU) and service objects (by Status) equally well.
But a pipeline errors when the downstream cmdlet cannot bind the incoming objects to any of its parameters:
Get-Location | Stop-Process
# Error: Stop-Process cannot bind the PathInfo object that Get-Location emits
So when constructing pipelines, check for compatibility first.
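Two quick compatibility checks: pipe the upstream command to Get-Member to see what type it emits, and read the downstream cmdlet's full help to see which parameters accept pipeline input:

```powershell
# What type of object does the upstream command produce?
Get-Process | Get-Member    # TypeName: System.Diagnostics.Process

# Which Sort-Object parameters accept pipeline input?
Get-Help Sort-Object -Full  # look for "Accept pipeline input? True (ByValue)"
```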
Use Parameters Effectively
Cmdlets in a pipeline use their parameters independently.
For example, consider this pipeline:
Get-ChildItem *.log | Sort-Object -Descending LastWriteTime | Select-Object -First 10
Here:
- Get-ChildItem uses the *.log path argument
- Sort-Object uses -Descending and LastWriteTime
- Select-Object uses -First 10
So each command applies filters separately through its parameters. Combined, they retrieve the latest 10 log files sorted by last write time.
Using parameters at each stage makes pipelines more efficient. Any cmdlet in the pipeline can manipulate the objects as needed.
Handle Errors Properly
When constructing pipelines, also check how errors are handled.
By default, a terminating error stops the pipeline immediately. Most cmdlet errors, however, are non-terminating: PowerShell displays the error but keeps executing the rest of the pipeline and script.
For example:
Get-ChildItem C:\NoSuchFolder | Sort-Object # Writes a "path not found" error
Write-Host "After Pipeline" # Still runs, because the error is non-terminating
You can control this behavior in two ways:
1. ErrorAction Parameter
Every cmdlet supports the common -ErrorAction parameter, which sets how that command's non-terminating errors are handled:
Get-ChildItem C:\NoSuchFolder -ErrorAction SilentlyContinue | Sort-Object
Write-Host "After Pipeline" # Runs, and the error message is suppressed
Pass -ErrorAction Stop instead to turn the error into a terminating one that halts the pipeline.
2. ErrorActionPreference Variable
The global $ErrorActionPreference variable controls error handling for commands that don't set -ErrorAction explicitly.
By default its value is Continue, meaning errors are displayed but execution continues. Set it to SilentlyContinue to hide the messages, or to Stop to halt on the first error:
$ErrorActionPreference = "SilentlyContinue"
Get-ChildItem C:\NoSuchFolder | Sort-Object
Write-Host "After Pipeline" # Runs with no error output
So use these options to fine-tune error handling in your pipelines.
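When you want to react to a failure instead of merely suppressing it, a common pattern is to escalate with -ErrorAction Stop and wrap the pipeline in try/catch (the folder path here is just an illustration):

```powershell
try {
    # -ErrorAction Stop promotes the non-terminating "path not found"
    # error to a terminating one, so try/catch can intercept it
    Get-ChildItem -Path C:\NoSuchFolder -ErrorAction Stop |
        Sort-Object Length -Descending
}
catch {
    Write-Host "Pipeline failed: $($_.Exception.Message)"
}
```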
Examples of Using Pipelines in PowerShell
Now that you understand how pipelines work, let's go through some applied examples.
Pipelines are extensively used for administration and DevOps tasks in PowerShell.
Here are some common usage patterns:
1. Filtering Objects
A key way pipelines are used is to filter object streams via the pipeline.
For example, to find large PowerShell log files, you can pipeline Get-ChildItem, Where-Object, and Sort-Object:
Get-ChildItem -Path $home\PowerShell\logs\* -File |
    Where-Object { $_.Length -gt 10MB } |
    Sort-Object -Property Length -Descending
This leverages each cmdlet's strengths to filter and pipe objects:
- Get-ChildItem – gets the files as FileInfo objects
- Where-Object – filters objects on a size criterion
- Sort-Object – sorts the remaining objects by file size
Finally, we get the large log files sorted by size.
2. Bulk Data Manipulation and Analysis
Pipelines also help collect statistics across datasets which can guide your administration.
For example, finding total size of log folders on different drives:
Get-PSDrive -PSProvider FileSystem | ForEach-Object {
    $drive = $_.Root
    Get-ChildItem -Path "$drive\logs\*" -Recurse |
        Measure-Object -Property Length -Sum
}
Here's what it does:
- Gets the PowerShell file system drives and pipes them to ForEach-Object
- For each drive, gets the total size of all log files using Measure-Object
This quickly collects log usage analytics across all of a machine's file system drives using pipeline chaining.
3. Intermediate Object Manipulation
You can also use pipelines to pass objects through a series of manipulations.
For instance, gathering users from Active Directory then extracting specific properties:
Get-ADUser -Filter * |
Select-Object Name,Office,EmailAddress |
ConvertTo-Html -As Table
Breaking it down:
- Get-ADUser – gets user directory objects
- Select-Object – outputs new objects with only the chosen properties
- ConvertTo-Html – converts the objects to an HTML table
Piping between transformations gives flexibility to analyze or format intermediate outputs.
Tips for Effective Use of PowerShell Pipelines
Here are some best practices for working with pipelines:
- Review object types flowing through the pipeline and ensure compatibility
- Construct pipelines iteratively – test after adding each command rather than all at once.
- Use parameters effectively – apply filters and transformations at each pipeline stage rather than just end results.
- Limit pipeline length for readability. For long pipelines, save intermediate results to a variable or file for troubleshooting.
- Handle pipeline errors appropriately via -ErrorAction parameters or the $ErrorActionPreference variable.
- Avoid CPU-intensive operations, such as complex script blocks or format conversions, in the middle of long pipelines.
- Use Set-StrictMode to detect issues early. Among other things, it turns references to nonexistent properties on pipeline objects into errors instead of silent nulls.
Adopting these pipeline development practices will help avoid unexpected errors and debugging issues.
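As an illustration of the strict-mode tip, enabling it turns a misspelled property name in a pipeline stage into an immediate error rather than a silent $null comparison:

```powershell
Set-StrictMode -Version Latest

# "Lenght" is a typo for "Length"; with strict mode on, this reference
# raises a property-not-found error instead of quietly matching nothing
Get-ChildItem -File | Where-Object { $_.Lenght -gt 10MB }
```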
Advanced Techniques for PowerShell Pipelines
Up to now, we focused on basic pipeline usage flowing one object stream through multiple commands.
But PowerShell also enables more advanced pipeline manipulation through a few stream-handling cmdlets.
The key ones are:
- Tee-Object – splits the pipeline stream into multiple streams
- Where-Object – filters/redirects objects based on a condition
- ForEach-Object – runs a script block on each pipeline object
Let's look at examples of implementing these for advanced workflows.
Tee-Object
Sometimes you want to divert the main pipeline stream into a secondary command set for additional processing.
Tee-Object lets you split streams easily without disrupting the main flow.
For example, finding specific error log entries and writing backups:
Get-Content -Path error.log |
    Tee-Object -FilePath error_backup.log |
    Where-Object { $_ -like "*fatal*" }
This splits the stream:
- Tee-Object writes a full backup copy of the stream to error_backup.log
- The same lines continue down the main pipeline to Where-Object
So you get filtered entries, while also keeping a copy of the full logs.
Where-Object
We saw Where-Object earlier for filtering objects.
You can also use it to redirect streams based on any condition, like handling different object types differently:
Get-ChildItem | Where-Object { !$_.PSIsContainer } | Sort-Object
Get-ChildItem | Where-Object { $_.PSIsContainer } | Sort-Object
This separates output by files and directories, piping each to Sort-Object.
So Where-Object provides flexible stream redirection without needing temporary outputs.
ForEach-Object
To perform custom operations, use ForEach-Object to execute a script block on every incoming pipeline object:
Get-Service | ForEach-Object {
    if ($_.Status -eq "Running") {
        Restart-Service $_.Name -Force
    }
}
Now every service is checked individually and restarted if running.
This offers a very extensible approach for complex data flows in your scripts.
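ForEach-Object also takes -Begin and -End script blocks that run once before and after the per-object -Process block, which is useful for setup and summary steps around a stream:

```powershell
# Count the files in the current folder and total their sizes,
# doing setup in -Begin and reporting in -End
Get-ChildItem -File | ForEach-Object -Begin {
    $count = 0
    $total = 0
} -Process {
    $count++
    $total += $_.Length
} -End {
    "Files: $count, total bytes: $total"
}
```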
Summarizing PowerShell Pipelines
The key points about pipelines are:
- Pipelines pass .NET objects between commands
- They make scripts easy to read and maintain
- Parameters and object types need to match between pipeline stages
- Advanced techniques like stream redirection provide additional flexibility
So keep pipelines in mind when developing PowerShell automation scripts and tools. They enable clearly structured code and simplified complex logic.
Over time, practicing and identifying pipeline opportunities will get easier. You'll discover new ways to leverage their capabilities across your PowerShell solutions.