Description
What happened?
In the file pod_devices.go, there is a potential double-locking of a read-write mutex in the podDevices function.
- On line 102, the read lock (pdev.RLock()) is acquired in the podDevices function to ensure safe access to pdev.devs.
- Later, on line 107, podDevices calls containerDevices, which attempts to acquire the same read lock via another call to pdev.RLock() on line 115. This results in the same mutex being read-locked twice within the same goroutine.
Even though both podDevices and containerDevices only take read locks (RLock()), recursively read-locking the same mutex can deadlock. According to the documentation of sync.RWMutex, if another goroutine calls Lock() between the two RLock() calls, the pending writer blocks the second RLock() while the writer itself waits on the first, and neither can proceed.
Go's sync.RWMutex, like the standard sync.Mutex, is not a recursive (reentrant) lock, yet it is effectively used recursively in the podDevices call chain. You can read more on why Go does not implement recursive locking here.
What did you expect to happen?
The expectation is that a mutex should not be double-locked within the same goroutine. In this case, either the locking logic needs to be restructured to prevent multiple acquisitions of the same lock, or containerDevices should not attempt to acquire the mutex when it is already held by podDevices.
How can we reproduce it (as minimally and precisely as possible)?
This issue was identified through static analysis, so it cannot be reliably reproduced via runtime observation. However, if left unresolved, it can surface as a deadlock whenever another goroutine acquires Lock() between the two nested RLock() calls, since Go's RWMutex does not support recursive read locking.
Anything else we need to know?
Sponsorship and Support:
This work is done by the security researchers from OpenRefactory and is supported by the Open Source Security Foundation (OpenSSF): Project Alpha-Omega. Alpha-Omega is a project partnering with open source software project maintainers to systematically find new, as-yet-undiscovered vulnerabilities in open source code, and get them fixed, to improve global software supply chain security.
The bug is found by running the Intelligent Code Repair (iCR) tool by OpenRefactory, Inc. and then manually triaging the results.
Kubernetes version
Details
$ kubectl version
# paste output here
Cloud provider
Details
OS version
No response
Install tools
No response
Container runtime (CRI) and version (if applicable)
No response
Related plugins (CNI, CSI, ...) and versions (if applicable)
No response