iSCSI (Internet Small Computer System Interface) has become a popular networked storage protocol thanks to its performance, flexibility, and affordability compared to legacy Fibre Channel SANs.
In this comprehensive guide, we will walk through the full process of configuring a production-grade iSCSI storage server on Ubuntu 22.04 LTS.
We will cover:
- iSCSI Architecture Overview
- Ubuntu Packages and Prereqs
- Networking Configuration
- Creating Backing Stores
- Access Control and Security
- Optimizing Performance
- Client Connectivity
- Usage Examples
So let's get started!
Overview of iSCSI Architecture
At a high level, an iSCSI environment consists of initiators and targets:
- iSCSI initiator – the client that connects to targets; it runs on hosts such as Linux, Windows, and ESXi.
- iSCSI target – the storage server that shares block devices over IP networks.
The initiator uses a unique IQN (iSCSI Qualified Name) to access LUNs exposed by the target server.

Some key advantages over Fibre Channel SANs:
- Cost savings – Utilize existing IP network. No FC HBAs and switches.
- Flexibility – Expose disks, partitions, LVM volumes, RAID arrays.
- Scalability – Service storage over LANs or WANs.
- Ease of use – Configure with simple Linux tools like targetcli.
However, care must be taken to properly tune the network and stack for performance sensitive workloads.
Now let's jump into the OS-level setup and packages required.
Installing TargetCLI Framework
There are several target frameworks available under Linux for exporting block storage over fabrics such as Fibre Channel and iSCSI.
In this guide, we leverage the flexible targetcli-fb package which uses the Linux kernel target subsystem modules:
sudo apt update
sudo apt install -y targetcli-fb
This provides all the utilities we need to serve block devices over iSCSI.
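Before moving on, it is worth confirming the target service will start at boot. A quick check, assuming the systemd unit is named target, as it is with targetcli-fb on Ubuntu:

```shell
# Start the LIO target service now and enable it at boot
sudo systemctl enable --now target

# Confirm it is running
systemctl is-active target

# After targetcli has been used, the kernel target modules should be loaded
lsmod | grep -E 'target_core_mod|iscsi_target_mod'
```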
Let's next prepare our network configuration.
Configuring Dedicated Storage Networking
For optimal performance, a dedicated storage network is strongly recommended for production iSCSI.

This prevents storage traffic contention with production client and internet data.
We will configure two network interfaces on our target server:
eno1 – Main IP traffic
eno2 – iSCSI storage traffic (192.168.2.0/24)
Specify your iSCSI subnets, IPs, routes etc. according to your environment:
sudo nano /etc/netplan/config.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: yes
    eno2:
      dhcp4: no
      addresses:
        - 192.168.2.100/24
      routes:
        - to: default
          via: 192.168.2.1
          metric: 100
Apply the new configuration:
sudo netplan apply
And confirm the interface received the IP:
ip addr show eno2
This provides a solid networking foundation for our storage traffic.
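If the NICs and switches on the storage network support jumbo frames, raising the MTU reduces per-packet overhead. A sketch, assuming the eno2 interface and netplan file above; 8972 is the largest ICMP payload that fits in a 9000-byte MTU (28 bytes go to IP and ICMP headers):

```shell
# In /etc/netplan/config.yaml, under eno2, add the line:
#   mtu: 9000
sudo netplan apply

# Verify end-to-end with a non-fragmenting ping toward the target
ping -M do -s 8972 -c 3 192.168.2.100
```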
Benchmarking Baseline Performance
Prior to exposing any iSCSI targets, we should benchmark baseline network throughput between the initiator and target.
This allows us to quantify what layer 3 bandwidth is available before adding in the iSCSI protocol overhead.
Use a simple TCP throughput test like iperf3. Start a server on the target with iperf3 -s, then run the client from the initiator:
iperf3 -c 192.168.2.100
In my environment, I'm able to reach ~940 Mbps, which is close to line rate for a 1 GbE link.
After we get our iSCSI targets configured, we can run storage benchmarks for comparison. Drops in throughput indicate potential iSCSI bottlenecks worth investigating.
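A single iperf3 stream can understate what a link sustains; parallel streams give a fuller picture. A sketch (run the server on the target first); the awk filter is just one way to pull the aggregate rate out of the summary line:

```shell
# On the target: run iperf3 as a daemon
iperf3 -s -D

# On the initiator: four parallel TCP streams for 30 seconds,
# printing only the aggregate receive rate from the [SUM] line
iperf3 -c 192.168.2.100 -P 4 -t 30 | awk '/SUM.*receiver/ {print $6, $7}'
```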
Now let‘s move on to creating some storage backing devices to expose.
Configuring iSCSI Backing Stores
We need to create storage resources on the target server to present to initiators as iSCSI LUNs.
Common options include:
- Physical disks
- Partitions
- LVM volumes
- Local directory (fileio backstore)
We will show examples using physical disks and logical volumes.
Exposing Physical Disks
Raw physical disk exposure keeps configuration simple, but it sacrifices some flexibility that LVM provides.
First identify storage disks attached to the target. SSDs or RAID arrays are ideal for performance:
lsblk
Output:
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  200G  0 disk
sdb      8:16   0   10G  0 disk
sdc      8:32   0   20G  0 disk
Assuming data protection is handled elsewhere, we will fully expose sdb over iSCSI.
Start targetcli and navigate to /backstores/block:
sudo targetcli
/> cd /backstores/block
Create the backing store named storage1:
/backstores/block> create storage1 /dev/sdb
Now create a unique target IQN; by convention it contains a date and a reversed domain name:
/backstores/block> cd /iscsi
/iscsi> create iqn.2022-09.com.example:storage1
And map a LUN to point to the backing resource:
/iscsi/iqn.202...orage1> cd tpg1/luns
/iscsi/iqn.202.../tpg1/luns> create /backstores/block/storage1
Lastly, enable CHAP authentication on the target portal group:
/iscsi/iqn.202.../tpg1> set attribute authentication=1
/iscsi/iqn.202.../tpg1> set auth userid=myiscsi password=R32gdv4F
This requires a valid username and password when connecting. (If you later add per-initiator ACLs, the same set auth command can be run on each ACL node instead.)
Save the target configuration and exit (targetcli prompts to save on exit; saveconfig does it explicitly):
/> saveconfig
/> exit
sudo systemctl restart target
We now have /dev/sdb exposed completely over iSCSI!
Use targetcli ls to verify:
o- / ......................................................................... [...]
  o- backstores .............................................................. [...]
  | o- block ................................................ [Storage Objects: 1]
  |   o- storage1 ..................... [/dev/sdb (10.0GiB) write-thru activated]
  |     o- alua .................................................. [ALUA Groups: 1]
  o- iscsi ............................................................ [Targets: 1]
    o- iqn.2022-09.com.example:storage1 ................................ [TPGs: 1]
      o- tpg1 ............................................... [no-gen-acls, auth]
        o- acls ........................................................ [ACLs: 0]
        o- luns ........................................................ [LUNs: 1]
        o- portals .................................................. [Portals: 1]
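The interactive session above can also be scripted: targetcli accepts one command per invocation, which is convenient for provisioning scripts. A sketch of the same storage1 setup done non-interactively:

```shell
# Each invocation runs a single command against the configuration tree
sudo targetcli /backstores/block create storage1 /dev/sdb
sudo targetcli /iscsi create iqn.2022-09.com.example:storage1
sudo targetcli /iscsi/iqn.2022-09.com.example:storage1/tpg1/luns \
    create /backstores/block/storage1
sudo targetcli saveconfig
```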
Now that basic disk exposure is understood, let's see a more advanced LVM example.
Creating LVM Backing Volumes
With LVM, we create a volume group (VG) then carve out logical volumes (LVs) rather than directly exposing physical disks.
This adds more flexibility for resizing, striping across multiple disks, caching, etc.
First create a physical volume (PV) with the underlying storage disk:
sudo pvcreate /dev/sdc
Now construct a volume group called vg1:
sudo vgcreate vg1 /dev/sdc
Carve out a 10 GB logical volume lvol1:
sudo lvcreate -n lvol1 -L 10G vg1
Validate your new LV exists:
sudo lvs
LV    VG  Attr       LSize
lvol1 vg1 -wi-a----- 10.00g
Start targetcli and again navigate to backstores/block.
Create the storage object pointing to the LV path:
/> cd /backstores/block
/backstores/block> create lvm1 /dev/vg1/lvol1
Now craft the iSCSI target definition using the LV:
/backstores/block> cd /iscsi
/iscsi> create iqn.2022-09.com.example:lvm1
/iscsi/iqn.202...vm1> cd tpg1/luns
/iscsi/iqn.202.../tpg1/luns> create /backstores/block/lvm1
And apply CHAP security on the portal group:
/iscsi/iqn.202...vm1/tpg1> set attribute authentication=1
/iscsi/iqn.202...vm1/tpg1> set auth userid=myiscsi password=R32gdv4F
Save the configuration then restart:
/> exit
sudo systemctl restart target
We now have an iSCSI target backed by LVM storage ready for initiators!
Feel free to create additional LVs striped across multiple disks. Storage performance should be validated once initiators connect.
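As a sketch of that striping, a new LV can be spread across two physical volumes; this assumes a second disk, here hypothetically /dev/sde, is available to add to vg1:

```shell
# Add a second physical volume to the group
sudo pvcreate /dev/sde
sudo vgextend vg1 /dev/sde

# Stripe a new LV across 2 PVs with a 64 KiB stripe size
sudo lvcreate -n lvstriped -L 10G -i 2 -I 64 vg1
```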
Next we will harden security on the target.
Implementing Access Control
It's good practice to lock down iSCSI access to specific approved initiators. This avoids unauthorized hosts attempting to mount your LUNs.
By default our targets allow initiator connections from ANY valid IQN.
Let's create an access control list (ACL) granting access only to defined initiators.
Note: This assumes target authentication is enabled requiring valid CHAP.
Start targetcli then navigate to the target IQN:
sudo targetcli
/> cd /iscsi/iqn.2022-09.com.example:lvm1/tpg1/acls
Add a new ACL for the initiator, using its IQN:
/iscsi/iqn.202.../tpg1/acls> create iqn.2022-09.initiator1
Now only approved initiators can connect!
Consider also enabling IP filtering on port 3260 via firewall rules for additional security layers.
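With ufw, that filtering can be sketched as follows; the subnet is the dedicated storage network from earlier, so adjust it to your environment:

```shell
# Allow iSCSI (TCP 3260) only from the storage subnet, deny it otherwise
sudo ufw allow from 192.168.2.0/24 to any port 3260 proto tcp
sudo ufw deny 3260/tcp
sudo ufw status numbered
```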
Performance Tuning and Benchmarks
In any production storage deployment, extensive performance testing against expected workloads is essential once configured.
There are many OS and network tunables that can help maximize throughput and minimize latency for iSCSI.
Some areas to assess and adjust:
- Network MTU sizes
- Interface interrupt moderation
- I/O scheduler elevator
- SCSI queue depths
- Number of path sessions
- Error recovery levels
- Volume striping size
- Caching settings
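A few of these knobs can be sketched with standard Linux interfaces; treat the values as starting points to benchmark rather than prescriptions (the device name sdb is illustrative):

```shell
# Larger socket buffers help high-bandwidth storage links
sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.wmem_max=16777216

# SSD-backed LUNs often do best with the "none" scheduler on the initiator
echo none | sudo tee /sys/block/sdb/queue/scheduler

# Deeper iSCSI queues: set in /etc/iscsi/iscsid.conf on the initiator
#   node.session.queue_depth = 128
```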
We already have a dedicated storage network configured which should help avoid contention with client traffic.
Run benchmarks like fio to identify disk bottlenecks:
fio --name=randread --ioengine=libaio --rw=randread --bs=4k --numjobs=30 --iodepth=128 --size=4G --runtime=60 --time_based --group_reporting
Monitoring overall latency will indicate if you need to tweak any OS or storage settings.
If using flash arrays or NVMe drives, 100k+ IOPS should be possible.
Attaching Clients to iSCSI Targets
Now on the initiator side, install the iSCSI tools:
sudo apt install open-iscsi
And enable the service:
sudo systemctl enable --now iscsid
Set the initiator name in /etc/iscsi/initiatorname.iscsi:
InitiatorName=iqn.2022-09.initiator1
This should match what you allowed in the target's ACL.
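Because the targets require CHAP, the initiator also needs matching credentials before login will succeed. They live in /etc/iscsi/iscsid.conf; a sketch that appends them (alternatively, uncomment and edit the existing lines), using the username and password set on the target:

```shell
# Enable CHAP and supply the credentials used during login
sudo tee -a /etc/iscsi/iscsid.conf <<'EOF'
node.session.auth.authmethod = CHAP
node.session.auth.username = myiscsi
node.session.auth.password = R32gdv4F
EOF

# Restart so iscsid picks up the new settings
sudo systemctl restart iscsid
```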
Next, perform an iSCSI target discovery from the client:
sudo iscsiadm -m discovery -t st -p 192.168.2.100
The discovered targets are printed:
192.168.2.100:3260,1 iqn.2022-09.com.example:storage1
192.168.2.100:3260,1 iqn.2022-09.com.example:lvm1
Lastly, connect to one of the targets:
sudo iscsiadm -m node -T iqn.2022-09.com.example:lvm1 --login
The new LUNs should now appear available to the host!
lsblk
When finished, logout to release resources:
sudo iscsiadm -m node -T iqn.2022-09.com.example:lvm1 --logout
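If instead you want the session to come back automatically after a reboot, flip the node's startup mode (a sketch using iscsiadm's update operation):

```shell
# Log in to this node automatically whenever iscsid starts
sudo iscsiadm -m node -T iqn.2022-09.com.example:lvm1 \
    -p 192.168.2.100 -o update -n node.startup -v automatic
```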
Let's next demonstrate some practical use cases with our setup.
Usage Examples
There are many possible deployment options now that we have performant iSCSI storage configured.
A few examples:
- Boot bare metal Linux hosts
- Install ESXi hypervisors
- Feed virtual machine images
- Linux NAS or file server
- Container storage for Kubernetes
- Client backups destination
We have the flexibility to create any number of LUNs on the target side mapping to new logical volumes.
Then on the initiator, connect to those volumes and format with filesystems like XFS or Ext4.
Local File Server
For a high performance NFS file server:
- Create a new 5 TB LVM volume on the target
- Expose it over iSCSI as a files1 target
- Connect from the initiator
- Partition with GPT and format XFS
- Mount and share over NFS
First extend our existing volume group with another disk, then grow the LV (this assumes a new disk /dev/sdd has been attached):
sudo vgextend vg1 /dev/sdd
sudo lvextend -L 5T /dev/vg1/lvol1
Because the LV is exported as a raw block device, there is no filesystem to resize on the target side; the initiator sees the larger device after a rescan and can grow its own filesystem online.
Now export the larger LV via iSCSI.
On the client side after connecting:
sudo fdisk /dev/sdb
Inside fdisk, press g to create a new GPT label, n to add a partition (accepting the defaults), and w to write the changes.
sudo mkfs.xfs /dev/sdb1
sudo mkdir /files1
sudo mount /dev/sdb1 /files1
We can now share /files1 over NFS and leverage the high speed iSCSI storage!
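The NFS step itself can be sketched as follows; the client subnet and export options are illustrative, so tune them for your environment:

```shell
# Install the NFS server and export the iSCSI-backed directory
sudo apt install -y nfs-kernel-server
echo '/files1 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
```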
There are many more advanced usage examples we could demonstrate. But this shows the flexibility.
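One example worth noting: if the backing LV is extended again on the target, the initiator can pick up the new capacity without downtime. A sketch, assuming the XFS filesystem above is mounted at /files1 (growpart ships in Ubuntu's cloud-guest-utils package):

```shell
# Rescan all logged-in iSCSI sessions for size changes
sudo iscsiadm -m session --rescan

# Grow the partition, then the mounted XFS filesystem, online
sudo growpart /dev/sdb 1
sudo xfs_growfs /files1
```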
Closing Recommendations
Configuring performant block storage over IP networks opens many new possibilities for centralizing data access and compute resources.
I hope this end-to-end guide for deploying Ubuntu as a production iSCSI server was helpful!
Be sure to tune your networking and storage stacks accordingly based on the workload and performance expectations.
Let me know if you have any other questions about taking advantage of iSCSI.


