author     Ayush Ranjan <ayushranjan@google.com>    2020-11-18 23:08:04 -0800
committer  gVisor bot <gvisor-bot@google.com>       2020-11-18 23:10:30 -0800
commit     74bc6e56ccd999998805e142622314f86dce770c (patch)
tree       6196b931de72143cbd00ad3b0b776d2d5cbc9080 /pkg
parent     e5650d124032e38a26e4b8058778e1c5b5aacbf0 (diff)
[vfs] kernfs: Do not panic if destroyed dentry is cached.
If a kernfs user does not cache dentries, then cacheLocked will destroy the
dentry. The current DecRef implementation is racy in this case, as the
following interleaving shows:

- Goroutine 1 calls DecRef and decreases the ref count from 1 to 0.
- Goroutine 2 acquires d.fs.mu for reading, calls IncRef, and increases the
  ref count from 0 to 1.
- Goroutine 2 releases d.fs.mu and calls DecRef again, decreasing the ref
  count from 1 to 0.
- Goroutine 1 now acquires d.fs.mu and calls cacheLocked, which destroys the
  dentry.
- Goroutine 2 now acquires d.fs.mu and calls cacheLocked, only to find that
  the dentry has already been destroyed!

Earlier we would panic in this case; instead we can simply return, rather
than adding complexity to handle this race. This is similar to what the
gofer client does.

We do not want to lock d.fs.mu when the filesystem caches dentries (the
common case, since procfs and sysfs do so), to avoid congestion due to lock
contention.

PiperOrigin-RevId: 343229496
Diffstat (limited to 'pkg')
-rw-r--r--  pkg/sentry/fsimpl/kernfs/kernfs.go  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pkg/sentry/fsimpl/kernfs/kernfs.go b/pkg/sentry/fsimpl/kernfs/kernfs.go
index c14abcff4..565d723f0 100644
--- a/pkg/sentry/fsimpl/kernfs/kernfs.go
+++ b/pkg/sentry/fsimpl/kernfs/kernfs.go
@@ -286,7 +286,7 @@ func (d *Dentry) cacheLocked(ctx context.Context) {
 	refs := atomic.LoadInt64(&d.refs)
 	if refs == -1 {
 		// Dentry has already been destroyed.
-		panic(fmt.Sprintf("cacheLocked called on a dentry which has already been destroyed: %v", d))
+		return
 	}
 	if refs > 0 {
 		if d.cached {
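
To make the interleaving from the commit message concrete, here is a minimal,
self-contained Go sketch. The filesystem and dentry types, and the IncRef,
DecRef, and cacheLocked methods below are simplified stand-ins, not the real
gVisor kernfs definitions (real kernfs may cache rather than destroy, and its
locking is more involved); the sketch only illustrates why cacheLocked can
observe an already-destroyed dentry and why returning, as in this change, is
safe.

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// filesystem and dentry are hypothetical stand-ins for the kernfs types.
type filesystem struct {
	mu sync.RWMutex // serializes structural changes to the dentry tree
}

type dentry struct {
	fs   *filesystem
	refs int64 // reference count; -1 once the dentry is destroyed
}

// IncRef is called with d.fs.mu held (read or write).
func (d *dentry) IncRef() {
	atomic.AddInt64(&d.refs, 1)
}

// DecRef acquires d.fs.mu only on the 1 -> 0 transition, so a concurrent
// IncRef/DecRef pair can slip in before the lock is taken, producing a
// second 1 -> 0 transition and a second cacheLocked call for the dentry.
func (d *dentry) DecRef() {
	if atomic.AddInt64(&d.refs, -1) == 0 {
		d.fs.mu.Lock()
		d.cacheLocked()
		d.fs.mu.Unlock()
	}
}

// cacheLocked destroys a dentry with no remaining references (real kernfs
// may cache it instead). Precondition: d.fs.mu must be locked.
func (d *dentry) cacheLocked() {
	refs := atomic.LoadInt64(&d.refs)
	if refs == -1 {
		// Dentry has already been destroyed by a racing caller.
		// Returning here is the fix; the old behavior was to panic.
		return
	}
	if refs > 0 {
		// Resurrected by a concurrent IncRef; nothing to do.
		return
	}
	atomic.StoreInt64(&d.refs, -1) // mark destroyed
}

func main() {
	fs := &filesystem{}
	d := &dentry{fs: fs, refs: 1}

	// Replay the commit message's interleaving deterministically.
	// Goroutine 1: DecRef drops the count 1 -> 0, but has not yet
	// acquired d.fs.mu to call cacheLocked.
	atomic.AddInt64(&d.refs, -1)

	// Goroutine 2: IncRef under d.fs.mu (read), 0 -> 1; release the
	// lock; then DecRef again, 1 -> 0.
	fs.mu.RLock()
	d.IncRef()
	fs.mu.RUnlock()
	atomic.AddInt64(&d.refs, -1)

	// Goroutine 1: acquires d.fs.mu and destroys the dentry.
	fs.mu.Lock()
	d.cacheLocked()
	fs.mu.Unlock()

	// Goroutine 2: acquires d.fs.mu and finds the dentry already
	// destroyed; with the fix, cacheLocked simply returns.
	fs.mu.Lock()
	d.cacheLocked()
	fs.mu.Unlock()

	fmt.Println("refs after race:", atomic.LoadInt64(&d.refs)) // -1
}

The main function replays the five steps from the commit message in program
order rather than spawning real goroutines, since the problematic schedule is
timing-dependent and would not reproduce reliably under the Go scheduler.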