author	gVisor bot <gvisor-bot@google.com>	2020-02-18 13:42:31 -0800
committer	gVisor bot <gvisor-bot@google.com>	2020-02-18 13:43:28 -0800
commit	906eb6295d54a05663a223f1dc379a16148de2d1 (patch)
tree	8a6848a35411383479e39800186f38a5a21cf450 /pkg/atomicbitops/atomicbitops.go
parent	c841373013ec8659b2954563796479f275b00bfa (diff)
atomicbitops package cleanups
- Redocument memory ordering from "no ordering" to "acquire-release". (No
functional change: both LOCK WHATEVER on x86, and LDAXR/STLXR loops on ARM64,
already have this property.)
- Remove IncUnlessZeroInt32 and DecUnlessOneInt32, which were only faster than
the equivalent loops using sync/atomic before the Go compiler inlined
non-unsafe.Pointer atomics many releases ago.
PiperOrigin-RevId: 295811743
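The second bullet above says the removed helpers were no faster than plain compare-and-swap loops once the Go compiler began inlining non-unsafe.Pointer atomics. As a minimal sketch (not the original gVisor code, which is not shown in this diff), `IncUnlessZeroInt32` can be reconstructed as such a loop over `sync/atomic`:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// incUnlessZeroInt32 is a hypothetical reconstruction of the removed
// IncUnlessZeroInt32 helper: it atomically increments *addr unless the
// current value is zero, reporting whether the increment happened. This is
// the "equivalent loop using sync/atomic" the commit message refers to.
func incUnlessZeroInt32(addr *int32) bool {
	for {
		v := atomic.LoadInt32(addr)
		if v == 0 {
			return false
		}
		// Retry if another goroutine changed *addr between the load
		// and the compare-and-swap.
		if atomic.CompareAndSwapInt32(addr, v, v+1) {
			return true
		}
	}
}

func main() {
	refs := int32(1)
	fmt.Println(incUnlessZeroInt32(&refs), refs) // increment succeeds
	refs = 0
	fmt.Println(incUnlessZeroInt32(&refs), refs) // increment refused at zero
}
```

This pattern is typical for reference counts that must not be revived from zero; the CAS loop retries until it either observes zero or wins the race.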
Diffstat (limited to 'pkg/atomicbitops/atomicbitops.go')
-rw-r--r--	pkg/atomicbitops/atomicbitops.go	47
1 file changed, 47 insertions(+), 0 deletions(-)
diff --git a/pkg/atomicbitops/atomicbitops.go b/pkg/atomicbitops/atomicbitops.go
new file mode 100644
index 000000000..1be081719
--- /dev/null
+++ b/pkg/atomicbitops/atomicbitops.go
@@ -0,0 +1,47 @@
+// Copyright 2018 The gVisor Authors.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// +build amd64 arm64
+
+// Package atomicbitops provides extensions to the sync/atomic package.
+//
+// All read-modify-write operations implemented by this package have
+// acquire-release memory ordering (like sync/atomic).
+package atomicbitops
+
+// AndUint32 atomically applies bitwise AND operation to *addr with val.
+func AndUint32(addr *uint32, val uint32)
+
+// OrUint32 atomically applies bitwise OR operation to *addr with val.
+func OrUint32(addr *uint32, val uint32)
+
+// XorUint32 atomically applies bitwise XOR operation to *addr with val.
+func XorUint32(addr *uint32, val uint32)
+
+// CompareAndSwapUint32 is like sync/atomic.CompareAndSwapUint32, but returns
+// the value previously stored at addr.
+func CompareAndSwapUint32(addr *uint32, old, new uint32) uint32
+
+// AndUint64 atomically applies bitwise AND operation to *addr with val.
+func AndUint64(addr *uint64, val uint64)
+
+// OrUint64 atomically applies bitwise OR operation to *addr with val.
+func OrUint64(addr *uint64, val uint64)
+
+// XorUint64 atomically applies bitwise XOR operation to *addr with val.
+func XorUint64(addr *uint64, val uint64)
+
+// CompareAndSwapUint64 is like sync/atomic.CompareAndSwapUint64, but returns
+// the value previously stored at addr.
+func CompareAndSwapUint64(addr *uint64, old, new uint64) uint64
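The declarations above are bodyless Go function stubs backed by per-architecture assembly (hence the amd64/arm64 build tag). As a pure-Go sketch of their semantics only — not the package's actual implementation — the bit operations and the CAS variant that returns the previous value can be expressed as `sync/atomic` loops:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// andUint32 sketches the semantics of atomicbitops.AndUint32 using a CAS
// loop over sync/atomic; the real package implements this in assembly.
func andUint32(addr *uint32, val uint32) {
	for {
		old := atomic.LoadUint32(addr)
		if atomic.CompareAndSwapUint32(addr, old, old&val) {
			return
		}
	}
}

// compareAndSwapUint32 sketches the package's CAS variant, which returns the
// value previously stored at addr instead of a bool (unlike sync/atomic's
// CompareAndSwapUint32).
func compareAndSwapUint32(addr *uint32, old, new uint32) uint32 {
	for {
		prev := atomic.LoadUint32(addr)
		if prev != old {
			// Mismatch: report the observed value; no swap occurs.
			return prev
		}
		if atomic.CompareAndSwapUint32(addr, old, new) {
			// Success: the previous value was exactly old.
			return old
		}
	}
}

func main() {
	flags := uint32(0b1111)
	andUint32(&flags, 0b0110)
	fmt.Printf("%#b\n", flags) // low bits masked down to 0b110

	// Returns the previous value (6) and installs the new one.
	fmt.Println(compareAndSwapUint32(&flags, 0b0110, 0b0001))
}
```

Returning the previous value rather than a bool lets callers of a failed CAS retry without issuing a separate load, which is why the package departs from the `sync/atomic` signature.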