Age | Commit message | Author |
|
The current behavior of config groups with `default: false` is buggy. The
intention is that adding an empty suppression section for that group to a
specific analyzer config should enable reporting for that analyzer, i.e.:
```
groups:
- name: foo
regex: "^foo/"
default: false
global:
...
analyzers:
asmdecl:
foo: # Enabled.
```
This should enable the foo group only for asmdecl. Unfortunately, today the
actual behavior depends on the contents of the `global:` section. If `global:`
contains an entry for foo, then it will work as described. If `global:` does
_not_ contain an entry for foo, then the group default (disabled) always
applies and the individual analyzer options have no effect.
The cause of this is confusion in `AnalyzerConfig.shouldReport`, which doesn't
distinguish between explicit suppression via a global suppression/exclude and
simply having no configuration at all. Make this more explicit, so that the
no-configuration case can fall through to the per-analyzer configuration before
falling back to the group default.
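A minimal sketch of that fall-through, assuming a hypothetical three-valued
lookup per configuration layer (this does not reproduce the actual
AnalyzerConfig code):
```
package config

// lookup reports whether a configuration layer has any entry for the group
// (found) and, if so, whether reporting is enabled there.
type lookup func(group string) (found, enabled bool)

// shouldReport consults the layers in order (global first, then the
// per-analyzer config) and uses the first one that actually mentions the
// group; only when no layer mentions the group does the group default apply.
func shouldReport(group string, groupDefault bool, layers ...lookup) bool {
	for _, l := range layers {
		if found, enabled := l(group); found {
			return enabled
		}
	}
	return groupDefault
}
```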
The last test case in the added test fails without this change.
This re-enables several opted-in analyzers for external dependencies, which
have gained a few more false positives to suppress.
PiperOrigin-RevId: 386904725
|
|
PiperOrigin-RevId: 385894869
|
|
This change makes the checklocks analyzer considerably more powerful, adding:
* The ability to traverse complex structures, e.g. to have multiple nested
fields as part of the annotation (a sketch follows this list).
* The ability to resolve simple anonymous functions and closures, and perform
lock analysis across these invocations. This does not apply to closures that
are passed elsewhere, since it is not possible to know the context in which
they might be invoked.
* The ability to annotate return values in addition to receivers and other
parameters, with the same complex structures noted above.
* Ignoring locking semantics for "fresh" objects, i.e. objects that are
allocated in the local frame (typically in a New-style constructor).
* Sanity checking of locking state across block transitions and returns, to
ensure that no unexpected locks are held.
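As a rough sketch of the nested-field traversal from the first point (the type
and field names, and the exact annotation spelling, are illustrative only):
```
package example

import "sync"

type inner struct {
	mu sync.Mutex
	id int
}

type outer struct {
	child inner

	// The annotation names a mutex reached through a nested field, which is
	// the kind of complex structure the analyzer can now traverse.
	// +checklocks:child.mu
	cached int
}

// refresh holds the nested mutex while touching the guarded field; writing
// o.cached without child.mu held would be reported.
func (o *outer) refresh() {
	o.child.mu.Lock()
	defer o.child.mu.Unlock()
	o.cached = o.child.id
}
```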
Note that initially, most of these findings are excluded by a comprehensive
nogo.yaml. The findings that are included are fundamental lock violations.
The changes here should be relatively low risk: minor refactorings to either
include the necessary annotations or simplify the code structure (in general,
removing closures in favor of methods) so that the analyzer can easily track
the lock state.
This change additionally includes two changes to nogo itself:
* Sanity checking of all types to ensure that the binary and ast-derived
types have a consistent objectpath, to prevent the bug above from occurring
silently (and causing much confusion). This also requires a trick in
order to ensure that serialized facts are consumable downstream. This can
be removed once https://go-review.googlesource.com/c/tools/+/331789 is merged.
* A minor refactoring to isolate the objdump settings in their own package.
This was originally used to implement the sanity check above, but this
information is now being passed another way. The minor refactor is preserved,
however, since it cleans up the code slightly and carries minimal risk.
PiperOrigin-RevId: 382613300
|
|
This is a suite of changes intended to dramatically speed up nogo.
First, there are minor changes that help efficiency significantly.
* Gob-based encoding is used internally, and JSON only used for the final
set of findings. This is done to preserve the existing format (which is
consumed by external tooling), and to facilitate manual debugging.
* Unnecessary regex compilation is elided in the configuration, and care is
taken for merges to prevent redundant entries. I'm not quite sure how, but it
turns out that this was consuming a significant amount of time, presumably
compiling the same regexes over and over again.
Second, this change enables bazel workers for nogo analyzers.
Workers enable persistent processes instead of creating and tearing down a
sandbox every invocation. A library is introduced to abstract these details,
and allow the tools to still be written using standard flags, etc.
The key here is that these binaries and the core of nogo become aware of
caches with worker.Cache. This allows us to save significant time loading the
same set of files and findings over and over again. These caches are keyed by
the digests that are provided by bazel, and are capped in overall size.
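A rough sketch of that digest-keyed caching idea (hypothetical types and
method names, not the real worker.Cache API):
```
package worker

import "sync"

type entry struct {
	value interface{}
	size  int64
}

// Cache memoizes loaded files and findings by the digest bazel reports for
// the corresponding input, with a rough cap on total size. maxSize must be
// set by the caller.
type Cache struct {
	mu      sync.Mutex
	maxSize int64
	size    int64
	entries map[string]entry
}

// Lookup returns the cached value for digest, computing and storing it on a
// miss. Eviction is deliberately simplistic: drop other entries until the
// total size is back under the cap.
func (c *Cache) Lookup(digest string, compute func() (interface{}, int64)) interface{} {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.entries[digest]; ok {
		return e.value
	}
	v, sz := compute()
	if c.entries == nil {
		c.entries = make(map[string]entry)
	}
	c.entries[digest] = entry{v, sz}
	c.size += sz
	for k := range c.entries {
		if c.size <= c.maxSize {
			break
		}
		if k != digest {
			c.size -= c.entries[k].size
			delete(c.entries, k)
		}
	}
	return v
}
```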
Note that the worker package attempts to capture output during each run, but
tools are no longer permitted to write to stdout. This necessitated dropping
some spurious output from checklocks.
PiperOrigin-RevId: 370505732
|
|
Presently, the standard library facts are not serialized in a deterministic
order. This means that they may change on each iteration,
requiring a large scale re-analysis of all downstream actions, which includes
all packages.
Improve the cache-ability of nogo actions by improving the determinism of both
facts and findings. Internally, default facts should be serialized as a sorted
list for this reason already.
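A minimal sketch of the sorted-serialization idea, assuming a simple map of
facts (not the actual nogo serialization code):
```
package facts

import (
	"encoding/gob"
	"io"
	"sort"
)

// writeSorted encodes a set of facts in sorted key order so that repeated
// runs over the same inputs produce byte-identical output, keeping downstream
// actions cacheable.
func writeSorted(w io.Writer, facts map[string][]byte) error {
	keys := make([]string, 0, len(facts))
	for k := range facts {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	enc := gob.NewEncoder(w)
	for _, k := range keys {
		if err := enc.Encode(k); err != nil {
			return err
		}
		if err := enc.Encode(facts[k]); err != nil {
			return err
		}
	}
	return nil
}
```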
PiperOrigin-RevId: 370188259
|
|
PiperOrigin-RevId: 369505182
|
|
PiperOrigin-RevId: 367517305
|
|
PiperOrigin-RevId: 366555466
|
|
This validates that if a struct field is annotated with "// +checklocks:mu",
where "mu" is a mutex field in the same struct, then the annotated field is
only accessed with "mu" locked.
All types that are guarded by a mutex must be annotated with
// +checklocks:<mutex field name>
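For illustration (the type and field names here are made up), a guarded field
and a compliant accessor look like:
```
package counter

import "sync"

type counter struct {
	mu sync.Mutex

	// val may only be accessed while mu is held.
	// +checklocks:mu
	val int
}

// inc takes mu before touching val, so the analyzer accepts it; accessing
// c.val without holding c.mu would be reported as a finding.
func (c *counter) inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.val++
}
```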
For more details please refer to README.md.
PiperOrigin-RevId: 360729328
|
|
This is required only for the built-in bazel nogo functionality.
Since we roll these targets manually via the wrappers, we don't need
to use go_tool_library. The inconsistent use of these targets leads
to conflicting instantiations of go_default_library and go_tool_library,
which both contain the same output files.
PiperOrigin-RevId: 355184975
|
|
For validation, the "on" key in existing YAML files is changed to a literal
string. In the YAML spec, a bare `on` is a keyword which encodes a boolean
value, so without relying on a specific implementation, the existing YAML
files were technically not encoding the intended object.
PiperOrigin-RevId: 350172147
|
|
PiperOrigin-RevId: 347047550
|
|
PiperOrigin-RevId: 339570821
|
|
This splits the nogo rules into a separate configuration yaml file, and
allows for multiple files to be provided.
Because attrs cannot be passed down to aspects, this required that all
findings be propagated up through the aspect Provider. This doesn't mean that
any extra work must be done, just that this information must be carried
through the graph, and some additional starlark complexity is required.
PiperOrigin-RevId: 339076357
|
|
PiperOrigin-RevId: 338784921
|
|
- When encodeState.resolve() determines that the resolved reflect.Value is
contained by a previously-resolved object, set wire.Ref.Type to the
containing object's type (existing.obj.Type()) rather than the contained
value's type (obj.Type()).
- When encodeState.resolve() determines that the resolved reflect.Value
contains a previously-resolved object, handle cases where the new object
contains *multiple* previously-resolved objects. (This may cause
previously-allocated object IDs to become unused; to facilitate this, change
encodeState.pending to a map, and change the wire format to prefix each
object with its object ID.)
- Add encodeState.encodedStructs to avoid redundant encoding of structs, since
deduplication of objects via encodeState.resolve() doesn't work for objects
instantiated by StateSave() and passed to SaveValue() (i.e. fields tagged
`state:".(whatever)"`).
- Make unexported array fields deserializable via slices that refer to them by
casting away their unexportedness in decodeState.decodeObject(), as sketched
below.
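A minimal sketch of that casting-away technique (the real
decodeState.decodeObject handles more cases):
```
package state

import (
	"reflect"
	"unsafe"
)

// settable returns a settable view of v, even when v refers to an unexported
// field, by re-deriving the value from its address. v must be addressable.
func settable(v reflect.Value) reflect.Value {
	if v.CanSet() {
		return v
	}
	return reflect.NewAt(v.Type(), unsafe.Pointer(v.UnsafeAddr())).Elem()
}
```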
Updates #1663
PiperOrigin-RevId: 338727687
|
|
Our current reference leak checker uses finalizers to verify whether an object
has reached zero references before it is garbage collected. There are multiple
problems with this mechanism, so a rewrite is in order.
With finalizers, there is no way to guarantee that a finalizer will run before
the program exits. When an unreachable object with a finalizer is garbage
collected, its finalizer will be added to a queue and run asynchronously. The
best we can do is run garbage collection upon sandbox exit to make sure that
all finalizers are enqueued.
Furthermore, if there is a chain of finalized objects, e.g. A points to B
points to C, garbage collection needs to run multiple times before all of the
finalizers are enqueued. The first GC run will register the finalizer for A but
not free it. It takes another GC run to free A, at which point B's finalizer
can be registered. As a result, we need to run GC as many times as the length
of the longest such chain to have a somewhat reliable leak checker.
Finally, a cyclical chain of structs pointing to one another will never be
garbage collected if a finalizer is set. This is a well-known issue with Go
finalizers (https://github.com/golang/go/issues/7358). Using leak checking on
filesystem objects that produce cycles will not work and can even result in
memory leaks.
The new leak checker stores reference counted objects in a global map when
leak check is enabled and removes them once they are destroyed. At sandbox
exit, any remaining objects in the map are considered leaked. This provides
a deterministic way of detecting leaks without relying on the complexities of
finalizers and garbage collection.
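A minimal sketch of such a registry, with hypothetical names (the real code
records richer per-object context):
```
package leakcheck

import (
	"fmt"
	"sync"
)

var (
	liveMu sync.Mutex
	// live maps each registered object (by identity) to extra logging
	// information, without growing the object's own ref count struct.
	live = map[interface{}]string{}
)

// Register records a ref-counted object when leak checking is enabled.
func Register(obj interface{}, note string) {
	liveMu.Lock()
	defer liveMu.Unlock()
	live[obj] = note
}

// Unregister removes an object when it is destroyed; panicking on an unknown
// object catches objects that were never registered in the first place.
func Unregister(obj interface{}) {
	liveMu.Lock()
	defer liveMu.Unlock()
	if _, ok := live[obj]; !ok {
		panic(fmt.Sprintf("object %T was never registered for leak checking", obj))
	}
	delete(live, obj)
}

// ReportLeaks aggregates whatever is still registered at sandbox exit into a
// single report.
func ReportLeaks() []string {
	liveMu.Lock()
	defer liveMu.Unlock()
	var leaks []string
	for obj, note := range live {
		leaks = append(leaks, fmt.Sprintf("leaked %T: %s", obj, note))
	}
	return leaks
}
```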
This approach has several benefits over the former, including:
- Always detects leaks of objects that should be destroyed very close to
sandbox exit. The old checker very rarely detected these leaks, because it
relied on garbage collection to be run in a short window of time.
- Panics if we forgot to enable leak check on a ref-counted object (we will try
to remove it from the map when it is destroyed, but it will never have been
added).
- Can store extra logging information in the map values without adding to the
size of the ref count struct itself. With the size of just an int64, the ref
count object remains compact, meaning frequent operations like IncRef/DecRef
are more cache-efficient.
- Can aggregate leak results in a single report after the sandbox exits.
Instead of having warnings littered in the log, which were
non-deterministically triggered by garbage collection, we can print all
warning messages at once. Note that this could also be a limitation--the
sandbox must exit properly for leaks to be detected.
Some basic benchmarking indicates that this change does not significantly
affect performance when leak checking is enabled, which is understandable
since registering/unregistering is only done once for each filesystem object.
Updates #1486.
PiperOrigin-RevId: 338685972
|
|
PiperOrigin-RevId: 338168977
|
|
Using the newer bazel rules necessitates a transition from proto1 to
proto2. In order to resolve the incompatibility between proto2 and
gogoproto, the cri runtimeoptions proto must be vendored.
Further, some of the semantics of bazel caching changed during the
transition. It is now necessary to:
- Ensure that :gopath depends only on pure library targets, as the
propagation of go_binary build attributes (pure, static) will
affect the generated files (though content remains the same,
there are conflicts with respect to the gopath).
- Update bazel.mk to include the possibility of binaries in the
bazel-out directory, as it will now put runsc and others there.
This required some refinements to the mechanism of extracting
paths, since some of the existing regexes resulted in false positives.
- Change nogo rules to prevent escape generation on binary targets.
For some reason, the newer version of bazel attempted to run the
nogo analysis on the binary targets, which fails due to the fact
that objdump does not work on the final binary. This must be due
to a change in the semantics of aspects in bazel3.
PiperOrigin-RevId: 337958324
|
|
PiperOrigin-RevId: 337581114
|
|
PiperOrigin-RevId: 337544107
|
|
PiperOrigin-RevId: 337515664
|
|
PiperOrigin-RevId: 336343819
|
|
PiperOrigin-RevId: 336126583
|
|
This change also adds support to go_stateify for detecting an appropriate
receiver name, avoiding a large number of false positives.
PiperOrigin-RevId: 335994587
|
|
When nogo checks are violated, they will automatically be posted
as annotations on the specific GitHub commit. This allows us
to enforce analysis & style rules and have violations called out.
PiperOrigin-RevId: 334447285
|
|
PiperOrigin-RevId: 331206424
|
|
PiperOrigin-RevId: 329408633
|
|
This immediately revealed an escape analysis violation (!), where
the sync.Map was being used in a context where escapes were not
allowed. This is a relatively minor fix and is included.
PiperOrigin-RevId: 328611237
|
|
Updates #3374
PiperOrigin-RevId: 328378700
|
|
PiperOrigin-RevId: 325280924
|
|
PiperOrigin-RevId: 322788791
|
|
PiperOrigin-RevId: 321053634
|
|
Updates #2972
PiperOrigin-RevId: 317113059
|
|
Updates #2972
PiperOrigin-RevId: 316942245
|
|
PiperOrigin-RevId: 307941984
|
|
See tools/nogo/README.md.
The checkescape tool is able to perform recursive escape analysis, using the
actual generated binary to confirm the results produced by the compiler itself.
As an initial use case, this replaces the manual escape analysis tests used for
go_marshal, and validates that the CopyIn and CopyOut paths will not require
any allocation or stack splits.
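As a rough illustration of the property being checked (the annotation syntax
is documented in the README and omitted here), compare a function that stays
allocation-free with one that escapes:
```
package escape

// fillOnStack never lets buf escape, so the compiler keeps it on the stack
// and no allocation is required.
func fillOnStack(x byte) byte {
	var buf [64]byte
	buf[0] = x
	return buf[0]
}

// leakToHeap returns a pointer into a local buffer, forcing buf to the heap;
// an analysis like checkescape would flag this on a path that is expected to
// be allocation-free.
func leakToHeap(x byte) *byte {
	var buf [64]byte
	buf[0] = x
	return &buf[0]
}
```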
Updates #2243
PiperOrigin-RevId: 307532986
|