make BAZEL_CONFIG=aarch64 arm-qemu-smoke-test
Signed-off-by: Andrei Vagin <avagin@gmail.com>
|
|
... Otherwise it will be mounted via some other path and will not be
available inside the container at all.
PiperOrigin-RevId: 350440843
|
|
We log a warning if objdump fails, but this appears in the build log, not the
test log, which can make it hard to notice.
Include it as context alongside the actual "(possible)" escape output to make
it clearer when something is wrong.
PiperOrigin-RevId: 350355759
|
|
This returns all targets, and handles the case where there are no targets.
PiperOrigin-RevId: 350263578
|
|
PiperOrigin-RevId: 350200437
|
|
For validation, the "on" key in existing YAML files is changed to a literal
(quoted) string. In the YAML spec, an unquoted "on" is a keyword that encodes a
boolean value, so without relying on implementation-specific behavior the YAML
files technically do not encode the intended object (the key becomes a boolean
rather than the string "on").
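To make the distinction concrete, here is a small sketch that decodes the key
both ways; it uses gopkg.in/yaml.v2 purely as an example of a YAML 1.1
resolver, which is an assumption for illustration and not a parser named by
this change:

```go
package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

func main() {
	var unquoted, quoted map[interface{}]interface{}

	// Unquoted: a YAML 1.1 resolver treats `on` as the boolean true.
	_ = yaml.Unmarshal([]byte("on: [push]"), &unquoted)
	// Quoted: the key stays the literal string "on".
	_ = yaml.Unmarshal([]byte(`"on": [push]`), &quoted)

	for k := range unquoted {
		fmt.Printf("unquoted key: %v (%T)\n", k, k) // true (bool)
	}
	for k := range quoted {
		fmt.Printf("quoted key: %v (%T)\n", k, k) // on (string)
	}
}
```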
PiperOrigin-RevId: 350172147
|
|
PiperOrigin-RevId: 349616845
|
|
This change cleans up some minor Makefile issues, and adds support for
BuildKite annotations on failures and on generated profiles. These
annotations will make failures very clear and link to the artifacts.
This change is a stepping stone for aggregating coverage data from all
individual test jobs, as this will also happen in .buildkite/annotate.sh.
PiperOrigin-RevId: 349606598
|
|
Otherwise this pollutes the 'go' branch and doesn't conform to the standards
for generated Bazel files.
PiperOrigin-RevId: 349605037
|
|
PiperOrigin-RevId: 349491589
|
|
Signed-off-by: Lai Jiangshan <jiangshan.ljs@antfin.com>
|
|
PiperOrigin-RevId: 347047550
|
|
PiperOrigin-RevId: 347038652
|
|
PiperOrigin-RevId: 346878344
|
|
A few images were broken with respect to aarch64. We should now
be able to run push-all-images with ARCH=aarch64 as part of the
regular continuous integration builds, and add aarch64 smoke tests
(via user emulation for now) to the regular test suite (future).
PiperOrigin-RevId: 346685462
|
|
AtomicPtrMap is a generic concurrent map from arbitrary keys to arbitrary
pointer values.
Benchmarks:
name time/op
StoreDelete/RWMutexMap-12 335ns ± 1%
StoreDelete/SyncMap-12 705ns ± 3%
StoreDelete/AtomicPtrMap-12 287ns ± 4%
StoreDelete/AtomicPtrMapSharded-12 289ns ± 1%
LoadOrStoreDelete/RWMutexMap-12 342ns ± 2%
LoadOrStoreDelete/SyncMap-12 662ns ± 2%
LoadOrStoreDelete/AtomicPtrMap-12 290ns ± 7%
LoadOrStoreDelete/AtomicPtrMapSharded-12 293ns ± 2%
LookupPositive/RWMutexMap-12 101ns ±26%
LookupPositive/SyncMap-12 202ns ± 2%
LookupPositive/AtomicPtrMap-12 71.1ns ± 2%
LookupPositive/AtomicPtrMapSharded-12 73.2ns ± 1%
LookupNegative/RWMutexMap-12 119ns ± 1%
LookupNegative/SyncMap-12 154ns ± 1%
LookupNegative/AtomicPtrMap-12 84.7ns ± 3%
LookupNegative/AtomicPtrMapSharded-12 86.8ns ± 1%
Concurrent/FixedKeys_1PercentWrites_RWMutexMap-12 1.32µs ± 2%
Concurrent/FixedKeys_1PercentWrites_SyncMap-12 52.7ns ±10%
Concurrent/FixedKeys_1PercentWrites_AtomicPtrMap-12 31.8ns ±20%
Concurrent/FixedKeys_1PercentWrites_AtomicPtrMapSharded-12 24.0ns ±15%
Concurrent/FixedKeys_10PercentWrites_RWMutexMap-12 860ns ± 3%
Concurrent/FixedKeys_10PercentWrites_SyncMap-12 68.8ns ±20%
Concurrent/FixedKeys_10PercentWrites_AtomicPtrMap-12 98.6ns ± 7%
Concurrent/FixedKeys_10PercentWrites_AtomicPtrMapSharded-12 42.0ns ±25%
Concurrent/FixedKeys_50PercentWrites_RWMutexMap-12 1.17µs ± 3%
Concurrent/FixedKeys_50PercentWrites_SyncMap-12 136ns ±34%
Concurrent/FixedKeys_50PercentWrites_AtomicPtrMap-12 286ns ± 3%
Concurrent/FixedKeys_50PercentWrites_AtomicPtrMapSharded-12 115ns ±35%
Concurrent/ChangingKeys_1PercentWrites_RWMutexMap-12 1.27µs ± 2%
Concurrent/ChangingKeys_1PercentWrites_SyncMap-12 5.01µs ± 3%
Concurrent/ChangingKeys_1PercentWrites_AtomicPtrMap-12 38.1ns ± 3%
Concurrent/ChangingKeys_1PercentWrites_AtomicPtrMapSharded-12 22.6ns ± 2%
Concurrent/ChangingKeys_10PercentWrites_RWMutexMap-12 1.08µs ± 2%
Concurrent/ChangingKeys_10PercentWrites_SyncMap-12 5.97µs ± 1%
Concurrent/ChangingKeys_10PercentWrites_AtomicPtrMap-12 390ns ± 2%
Concurrent/ChangingKeys_10PercentWrites_AtomicPtrMapSharded-12 93.6ns ± 1%
Concurrent/ChangingKeys_50PercentWrites_RWMutexMap-12 1.77µs ± 2%
Concurrent/ChangingKeys_50PercentWrites_SyncMap-12 8.07µs ± 2%
Concurrent/ChangingKeys_50PercentWrites_AtomicPtrMap-12 1.61µs ± 2%
Concurrent/ChangingKeys_50PercentWrites_AtomicPtrMapSharded-12 386ns ± 1%
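For readers unfamiliar with the shape of the map being measured, here is a
minimal sketch of the interface it implies; the method names and the sync.Map
backing are assumptions for illustration only, not the generated AtomicPtrMap
code (which is exactly what the benchmarks above compare against sync.Map):

```go
package example

import "sync"

// Node is an arbitrary pointee type.
type Node struct{ ID uint64 }

// NodeMap sketches a typed concurrent map from uint64 keys to *Node values.
type NodeMap struct {
	m sync.Map // uint64 -> *Node
}

// Load returns the pointer stored for key, or nil if there is none.
func (nm *NodeMap) Load(key uint64) *Node {
	if v, ok := nm.m.Load(key); ok {
		return v.(*Node)
	}
	return nil
}

// Store associates key with val; storing nil removes the entry.
func (nm *NodeMap) Store(key uint64, val *Node) {
	if val == nil {
		nm.m.Delete(key)
		return
	}
	nm.m.Store(key, val)
}

// Range calls f for each entry until f returns false.
func (nm *NodeMap) Range(f func(key uint64, val *Node) bool) {
	nm.m.Range(func(k, v interface{}) bool {
		return f(k.(uint64), v.(*Node))
	})
}
```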
Updates #231
PiperOrigin-RevId: 346614776
|
|
Recursive make is difficult to follow and debug. Drop this by using
internal functions, which, while difficult, are easier than trying to
follow recursive invocations.
Further simplify the Makefile by collapsing the image bits and removing
the tools/vm directory, which is effectively unused.
Fixes #4952
PiperOrigin-RevId: 346569133
|
|
PiperOrigin-RevId: 346496532
|
|
PiperOrigin-RevId: 345399936
|
|
PiperOrigin-RevId: 345245285
|
|
Currently, if containerd is installed locally via tools/installers/containerd,
then it will not necessarily be used if containerd is installed in the system
path. This means that the existing containerd tests are all likely broken.
Also, use libbtrfs-dev instead of btrfs-tools, which is not actually required.
PiperOrigin-RevId: 344879109
|
|
PiperOrigin-RevId: 343946859
|
|
This change also simplifies and documents the build_cmd pipeline, and
reduces general noise when debugging Makefile issues.
It also drops the mapping for /etc/docker/daemon.json: if that file does not
exist initially, the mapping creates a directory at that path (causing lots
of confusion and breakage).
PiperOrigin-RevId: 343932456
|
|
PiperOrigin-RevId: 343885770
|
|
PiperOrigin-RevId: 341757694
|
|
PiperOrigin-RevId: 341732791
|
|
PiperOrigin-RevId: 340941898
|
|
PiperOrigin-RevId: 340536306
|
|
PiperOrigin-RevId: 339913577
|
|
PiperOrigin-RevId: 339886754
|
|
#4673 does not seem to work. Try this new approach.
PiperOrigin-RevId: 339754794
|
|
As you can see at https://github.com/google/gvisor/commits/master, there are a
lot of red commits. This is because the Go / generate GitHub Action flakes.
On merge, two variants of this workflow run:
- one triggered by the pull request (copybara force pushes to the PR right
  before merge)
- one triggered by the push (merge)
If the push action finishes before the pull request action can run
go_branch.sh, then the changes that go_branch.sh makes are already pushed to
the remote go branch. Consequently, the pull request action ends up having
nothing to commit, causing it to fail.
Now we skip running go_branch.sh if we find that our current working commit
has already been committed to the remote.
This change also fixes lint warnings.
PiperOrigin-RevId: 339586760
|
|
PiperOrigin-RevId: 339570821
|
|
This splits the nogo rules into a separate configuration yaml file, and
allows for multiple files to be provided.
Because attrs cannot be passed down to aspects, this required that all
findings be propagated up via the aspect Provider. This doesn't mean that
any extra work must be done, just that this information must be carried
through the graph, and some additional Starlark complexity is required.
PiperOrigin-RevId: 339076357
|
|
Add a parser binary for parsing files containing benchmark output and
sending the data to BigQuery.
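A rough sketch of the kind of parsing involved, assuming standard
`go test -bench` output lines of the form
`BenchmarkName-12 100 1234567 ns/op 42 requests/sec`; the function and type
names here are hypothetical, not the parser's actual API:

```go
package parser

import (
	"fmt"
	"strconv"
	"strings"
)

// Metric is one "<value> <unit>" pair reported for a benchmark.
type Metric struct {
	Value float64
	Unit  string
}

// ParseLine splits one benchmark output line into its name, iteration count,
// and metrics. It is illustrative only; the real parser would also recover
// any parameters encoded in the name and fill BigQuery structs.
func ParseLine(line string) (name string, iters int, metrics []Metric, err error) {
	fields := strings.Fields(line)
	if len(fields) < 4 || !strings.HasPrefix(fields[0], "Benchmark") {
		return "", 0, nil, fmt.Errorf("not a benchmark line: %q", line)
	}
	name = fields[0]
	if iters, err = strconv.Atoi(fields[1]); err != nil {
		return "", 0, nil, err
	}
	// Remaining fields come in "<value> <unit>" pairs.
	for i := 2; i+1 < len(fields); i += 2 {
		v, err := strconv.ParseFloat(fields[i], 64)
		if err != nil {
			return "", 0, nil, err
		}
		metrics = append(metrics, Metric{Value: v, Unit: fields[i+1]})
	}
	return name, iters, metrics, nil
}
```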
PiperOrigin-RevId: 339066396
|
|
Also let the GitHub bug reviver detect both in TODOs.
PiperOrigin-RevId: 338785089
|
|
PiperOrigin-RevId: 338784921
|
|
There are too many dependencies.
PiperOrigin-RevId: 338746264
|
|
- When encodeState.resolve() determines that the resolved reflect.Value is
contained by a previously-resolved object, set wire.Ref.Type to the
containing object's type (existing.obj.Type()) rather than the contained
value's type (obj.Type()).
- When encodeState.resolve() determines that the resolved reflect.Value
contains a previously-resolved object, handle cases where the new object
contains *multiple* previously-resolved objects. (This may cause
previously-allocated object IDs to become unused; to facilitate this, change
encodeState.pending to a map, and change the wire format to prefix each
object with its object ID.)
- Add encodeState.encodedStructs to avoid redundant encoding of structs, since
deduplication of objects via encodeState.resolve() doesn't work for objects
instantiated by StateSave() and passed to SaveValue() (i.e. fields tagged
`state:".(whatever)"`).
- Make unexported array fields deserializable via slices that refer to them by
casting away their unexportedness in decodeState.decodeObject(); a sketch of
this reflection pattern follows below.
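The "casting away unexportedness" mentioned above generally relies on reflect
plus unsafe; a standalone sketch of that pattern (not the actual pkg/state
code, and the type and field names are made up) is:

```go
package main

import (
	"fmt"
	"reflect"
	"unsafe"
)

type holder struct {
	data [4]byte // unexported array field
}

func main() {
	h := &holder{}
	f := reflect.ValueOf(h).Elem().FieldByName("data")
	// f.CanSet() is false because the field is unexported; re-derive a
	// settable Value at the same address to "cast away" unexportedness.
	settable := reflect.NewAt(f.Type(), unsafe.Pointer(f.UnsafeAddr())).Elem()
	// A slice referring to the unexported array can now be written through.
	copy(settable.Slice(0, f.Len()).Bytes(), []byte{1, 2, 3, 4})
	fmt.Println(h.data) // [1 2 3 4]
}
```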
Updates #1663
PiperOrigin-RevId: 338727687
|
|
Our current reference leak checker uses finalizers to verify whether an object
has reached zero references before it is garbage collected. There are multiple
problems with this mechanism, so a rewrite is in order.
With finalizers, there is no way to guarantee that a finalizer will run before
the program exits. When an unreachable object with a finalizer is garbage
collected, its finalizer will be added to a queue and run asynchronously. The
best we can do is run garbage collection upon sandbox exit to make sure that
all finalizers are enqueued.
Furthermore, if there is a chain of finalized objects, e.g. A points to B
points to C, garbage collection needs to run multiple times before all of the
finalizers are enqueued. The first GC run will enqueue the finalizer for A but
not free A. It takes another GC run to free A, at which point B's finalizer
can be enqueued. As a result, we need to run GC as many times as the length
of the longest such chain to have a somewhat reliable leak checker.
Finally, a cyclical chain of structs pointing to one another will never be
garbage collected if a finalizer is set. This is a well-known issue with Go
finalizers (https://github.com/golang/go/issues/7358). Using leak checking on
filesystem objects that produce cycles will not work and even result in memory
leaks.
The new leak checker stores reference counted objects in a global map when
leak check is enabled and removes them once they are destroyed. At sandbox
exit, any remaining objects in the map are considered leaked. This provides
a deterministic way of detecting leaks without relying on the complexities of
finalizers and garbage collection.
This approach has several benefits over the former, including:
- Always detects leaks of objects that should be destroyed very close to
sandbox exit. The old checker very rarely detected these leaks, because it
relied on garbage collection to be run in a short window of time.
- Panics if we forgot to enable leak check on a ref-counted object (we will try
to remove it from the map when it is destroyed, but it will never have been
added).
- Can store extra logging information in the map values without adding to the
size of the ref count struct itself. With the size of just an int64, the ref
count object remains compact, meaning frequent operations like IncRef/DecRef
are more cache-efficient.
- Can aggregate leak results in a single report after the sandbox exits.
Instead of having warnings littered throughout the log, non-deterministically
triggered by garbage collection, we can print all warning messages at once.
Note that this could also be a limitation: the sandbox must exit properly for
leaks to be detected.
Some basic benchmarking indicates that this change does not significantly
affect performance when leak checking is enabled, which is understandable
since registering/unregistering is only done once for each filesystem object.
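A minimal sketch of the map-based registration described above; the names and
details are assumptions for illustration, not gVisor's actual
reference-counting package:

```go
package leakcheck

import (
	"fmt"
	"sync"
)

var (
	mu   sync.Mutex
	live = map[interface{}]string{} // object -> description for the report
)

// Register is called when leak checking is enabled for a ref-counted object.
func Register(obj interface{}, desc string) {
	mu.Lock()
	defer mu.Unlock()
	if _, ok := live[obj]; ok {
		panic(fmt.Sprintf("%s already registered", desc))
	}
	live[obj] = desc
}

// Unregister is called when the reference count reaches zero.
func Unregister(obj interface{}) {
	mu.Lock()
	defer mu.Unlock()
	if _, ok := live[obj]; !ok {
		// Leak checking was never enabled for this object: a bug.
		panic("unregistering object that was never registered")
	}
	delete(live, obj)
}

// Report emits a single aggregated report at sandbox exit.
func Report() {
	mu.Lock()
	defer mu.Unlock()
	for _, desc := range live {
		fmt.Printf("leaked: %s\n", desc)
	}
}
```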
Updates #1486.
PiperOrigin-RevId: 338685972
|
|
When the commit description contains "commit ", it can be wrongly identified
as a commit hash. This commit changes the parsing to take only lines that
begin with "commit ", since the description is always indented by `git log`.
Copybara uses a merge commit for external contributions, which means that not
all commits contain a Piper ID. Add `--first-parent` to `git log` so that it
only lists commits that contain a Piper ID.
PiperOrigin-RevId: 338183812
|
|
PiperOrigin-RevId: 338168977
|
|
Using the newer bazel rules necessitates a transition from proto1 to
proto2. In order to resolve the incompatibility between proto2 and
gogoproto, the cri runtimeoptions proto must be vendored.
Further, some of the semantics of bazel caching changed during the
transition. It is now necessary to:
- Ensure that :gopath depends only on pure library targets, as the
propagation of go_binary build attributes (pure, static) will
affect the generated files (though the content remains the same,
there are conflicts with respect to the gopath).
- Update bazel.mk to include the possibility of binaries in the
bazel-out directory, as it will now put runsc and others there.
This required some refinements to the mechanism for extracting
paths, since some of the existing regexes resulted in false positives.
- Change nogo rules to prevent escape generation on binary targets.
For some reason, the newer version of bazel attempted to run the
nogo analysis on the binary targets, which fails because objdump
does not work on the final binary. This must be due to a change in
the semantics of aspects in bazel 3.
PiperOrigin-RevId: 337958324
|
|
PiperOrigin-RevId: 337581114
|
|
PiperOrigin-RevId: 337544107
|
|
PiperOrigin-RevId: 337515664
|
|
Add parsing and formatting for Go benchmarks, for use by the Docker
benchmarks.
This change adds a library for printing and parsing test parameters and
metrics. Benchmarks use the library to print parameters in the benchmark
title (e.g. the name field in b.Run()) and to report CustomMetrics. The
parser uses the library to parse the printed data from benchmark output and
put it into BigQuery structs.
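A brief sketch of the producer side using only the standard testing API (the
benchmark name and metric shown are made up; the change's own helper functions
are not reproduced here):

```go
package benchmarks

import (
	"testing"
	"time"
)

func BenchmarkStartup(b *testing.B) {
	// Parameters are encoded into the sub-benchmark name so a parser can
	// later recover them from the output line.
	b.Run("image.ubuntu-runtime.runsc", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			time.Sleep(time.Millisecond) // stand-in for the measured operation
		}
		// Custom metrics appear as extra "<value> <unit>" columns in the output.
		b.ReportMetric(42, "containers/sec")
	})
}
```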
PiperOrigin-RevId: 336365628
|
|
PiperOrigin-RevId: 336343819
|
|
PiperOrigin-RevId: 336340035
|
|
PiperOrigin-RevId: 336126583
|