path: root/tools/nogo/config.go
2021-07-26  Fix per-analyzer overrides of default-disabled groups  (Michael Pratt)
Currently, the behavior of config groups with `default: false` is buggy. The intention is that adding an empty suppression section for that group to a specific analyzer config should enable reporting for that analyzer. i.e.,

```
groups:
  - name: foo
    regex: "^foo/"
    default: false

global:
  ...

analyzers:
  asmdecl:
    foo: # Enabled.
```

This should enable the foo group only for asmdecl.

Unfortunately, today the actual behavior depends on the contents of the `global:` section. If `global:` contains an entry for foo, then it will work as described. If `global:` does _not_ contain an entry for foo, then the group default (disabled) always applies and the individual analyzer options have no effect.

The cause of this is confusion in `AnalyzerConfig.shouldReport`, which doesn't distinguish between explicit suppression via a global suppression/exclude and simply having no configuration at all. Make this more explicit, so that the no-configuration case can fall through to per-analyzer configuration before falling back to the group default.

The last test case in the added test fails without this change.

This re-enables several opted-in analyzers for external dependencies, which have gained a few more false positives to suppress.

PiperOrigin-RevId: 386904725
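For illustration, here is a minimal sketch of the three-way distinction described above (explicitly suppressed, explicitly enabled, or simply unconfigured). The type and field names are hypothetical stand-ins for this example only, not the actual config.go API:

```go
package main

import "fmt"

// groupConfig is a deliberately simplified, hypothetical stand-in for the
// real configuration types.
type groupConfig struct {
	defaultEnabled bool                       // the group's `default:` setting
	global         map[string]bool            // group name -> enabled, if configured globally
	perAnalyzer    map[string]map[string]bool // analyzer -> group -> enabled
}

// shouldReport distinguishes explicit global suppression from having no
// configuration at all: only the latter falls through to the per-analyzer
// section and, finally, to the group default.
func (c *groupConfig) shouldReport(analyzer, group string) bool {
	if enabled, ok := c.global[group]; ok && !enabled {
		return false // explicitly suppressed globally
	}
	if groups, ok := c.perAnalyzer[analyzer]; ok {
		if enabled, ok := groups[group]; ok {
			return enabled // per-analyzer override, e.g. the empty "foo:" section
		}
	}
	return c.defaultEnabled // unconfigured everywhere: the group default applies
}

func main() {
	c := &groupConfig{
		defaultEnabled: false,
		global:         map[string]bool{}, // note: no entry for "foo"
		perAnalyzer:    map[string]map[string]bool{"asmdecl": {"foo": true}},
	}
	fmt.Println(c.shouldReport("asmdecl", "foo")) // true: enabled for asmdecl only
	fmt.Println(c.shouldReport("shadow", "foo"))  // false: falls back to the group default
}
```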
2021-04-26  nogo: enable bazel workers and other optimizations.  (Adin Scannell)
This is a suite of changes intended to dramatically speed up nogo.

First, there are minor changes that help efficiency significantly.

* Gob-based encoding is used internally, and JSON is only used for the final set of findings. This is done to preserve the existing format (which is consumed by external tooling), and to facilitate manual debugging.

* Unnecessary regex compilation is elided in the configuration, and care is taken during merges to prevent redundant entries. I'm not quite sure how, but it turns out that this consumed a significant amount of time, presumably compiling the same regexes over and over again.

Second, this change enables bazel workers for nogo analyzers. Workers enable persistent processes instead of creating and tearing down a sandbox for every invocation. A library is introduced to abstract these details and allow the tools to still be written using standard flags, etc.

The key here is that these binaries and the core of nogo become aware of caches via worker.Cache. This allows us to save significant time loading the same set of files and findings over and over again. These caches are keyed by the digests that are provided by bazel, and are capped in overall size.

Note that the worker package attempts to capture output during each run, but tools are no longer permitted to write to stdout. This necessitated dropping some spurious output from checklocks.

PiperOrigin-RevId: 370505732
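As an illustration of the digest-keyed, size-capped caching idea, here is a minimal sketch; the names (cache, Lookup) and the FIFO eviction policy are assumptions for this example, not the actual worker.Cache API:

```go
package main

import "fmt"

// cache is a deliberately simplified, digest-keyed store with a hard cap on
// the number of entries; eviction is simple FIFO.
type cache struct {
	maxEntries int
	entries    map[string]interface{} // keyed by the bazel-provided input digest
	order      []string               // insertion order, for eviction
}

func newCache(maxEntries int) *cache {
	return &cache{maxEntries: maxEntries, entries: make(map[string]interface{})}
}

// Lookup returns the cached value for a digest, computing and storing it on a
// miss. Because the key is the content digest, a persistent worker can reuse
// loaded packages and findings across actions without re-reading the same files.
func (c *cache) Lookup(digest string, compute func() interface{}) interface{} {
	if v, ok := c.entries[digest]; ok {
		return v
	}
	v := compute()
	if len(c.order) >= c.maxEntries { // enforce the overall size cap
		oldest := c.order[0]
		c.order = c.order[1:]
		delete(c.entries, oldest)
	}
	c.entries[digest] = v
	c.order = append(c.order, digest)
	return v
}

func main() {
	c := newCache(128)
	findings := c.Lookup("sha256:...", func() interface{} {
		return []string{"finding: example"} // stands in for expensive loading/analysis
	})
	fmt.Println(findings)
}
```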
2020-10-26  Add nogo configuration.  (Adin Scannell)
This splits the nogo rules into a separate configuration yaml file, and allows for multiple files to be provided. Because attrs cannot be passed down to aspects, this required that all findings are propagated up the aspect Provider. This doesn't mean that any extra work must be done, just that this information must be carried through the graph, and some additional starlark complexity is required.

PiperOrigin-RevId: 339076357
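For illustration, a minimal sketch of loading and merging several configuration YAML files into one effective configuration; the Config shape and the gopkg.in/yaml.v2 dependency are assumptions for this example, not the actual tools/nogo/config.go types:

```go
package main

import (
	"fmt"
	"os"

	yaml "gopkg.in/yaml.v2"
)

// Config is a deliberately simplified view: per-analyzer lists of suppression regexes.
type Config struct {
	Analyzers map[string][]string `yaml:"analyzers"`
}

// Merge folds another configuration into c, so that multiple config files can
// be combined into a single effective configuration.
func (c *Config) Merge(other *Config) {
	if c.Analyzers == nil {
		c.Analyzers = make(map[string][]string)
	}
	for name, rules := range other.Analyzers {
		c.Analyzers[name] = append(c.Analyzers[name], rules...)
	}
}

// load reads each file in turn and merges it into a single Config.
func load(paths []string) (*Config, error) {
	merged := &Config{}
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			return nil, err
		}
		var c Config
		if err := yaml.Unmarshal(data, &c); err != nil {
			return nil, fmt.Errorf("%s: %v", p, err)
		}
		merged.Merge(&c)
	}
	return merged, nil
}

func main() {
	cfg, err := load(os.Args[1:]) // e.g. ./merge base.yaml overrides.yaml
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}
```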
2020-10-23  Fix nogo tests in //pkg/sentry/socket/...  (Ting-Yu Wang)
PiperOrigin-RevId: 338784921
2020-10-23  //pkg/state fixes for VFS2.  (Jamie Liu)
- When encodeState.resolve() determines that the resolved reflect.Value is contained by a previously-resolved object, set wire.Ref.Type to the containing object's type (existing.obj.Type()) rather than the contained value's type (obj.Type()).

- When encodeState.resolve() determines that the resolved reflect.Value contains a previously-resolved object, handle cases where the new object contains *multiple* previously-resolved objects. (This may cause previously-allocated object IDs to become unused; to facilitate this, change encodeState.pending to a map, and change the wire format to prefix each object with its object ID.)

- Add encodeState.encodedStructs to avoid redundant encoding of structs, since deduplication of objects via encodeState.resolve() doesn't work for objects instantiated by StateSave() and passed to SaveValue() (i.e. fields tagged `state:".(whatever)"`).

- Make unexported array fields deserializable via slices that refer to them by casting away their unexportedness in decodeState.decodeObject().

Updates #1663

PiperOrigin-RevId: 338727687
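To illustrate why a pending map goes hand in hand with ID-prefixed objects: once IDs can be skipped, the decoder can no longer infer an object's ID from its position, so each object must carry its ID. The sketch below is an illustrative assumption using gob, not the actual pkg/state wire format or types:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

type objectID uint64

// encoder holds objects that are waiting to be written. Because pending is a
// map keyed by ID, some previously-allocated IDs may end up unused.
type encoder struct {
	buf     bytes.Buffer
	pending map[objectID]interface{}
	nextID  objectID
}

// assign allocates a new ID for obj and records it as pending.
func (e *encoder) assign(obj interface{}) objectID {
	e.nextID++
	if e.pending == nil {
		e.pending = make(map[objectID]interface{})
	}
	e.pending[e.nextID] = obj
	return e.nextID
}

// flush writes every pending object prefixed by its ID, so the decoder can
// tolerate gaps in the ID space.
func (e *encoder) flush() error {
	enc := gob.NewEncoder(&e.buf)
	for id, obj := range e.pending {
		if err := enc.Encode(id); err != nil {
			return err
		}
		if err := enc.Encode(obj); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	var e encoder
	e.assign("hello")
	e.assign(uint64(42))
	if err := e.flush(); err != nil {
		panic(err)
	}
	fmt.Println(e.buf.Len(), "bytes written")
}
```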
2020-10-23  Rewrite reference leak checker without finalizers.  (Dean Deng)
Our current reference leak checker uses finalizers to verify whether an object has reached zero references before it is garbage collected. There are multiple problems with this mechanism, so a rewrite is in order.

With finalizers, there is no way to guarantee that a finalizer will run before the program exits. When an unreachable object with a finalizer is garbage collected, its finalizer will be added to a queue and run asynchronously. The best we can do is run garbage collection upon sandbox exit to make sure that all finalizers are enqueued.

Furthermore, if there is a chain of finalized objects, e.g. A points to B points to C, garbage collection needs to run multiple times before all of the finalizers are enqueued. The first GC run will register the finalizer for A but not free it. It takes another GC run to free A, at which point B's finalizer can be registered. As a result, we need to run GC as many times as the length of the longest such chain to have a somewhat reliable leak checker.

Finally, a cyclical chain of structs pointing to one another will never be garbage collected if a finalizer is set. This is a well-known issue with Go finalizers (https://github.com/golang/go/issues/7358). Using leak checking on filesystem objects that produce cycles will not work and can even result in memory leaks.

The new leak checker stores reference-counted objects in a global map when leak checking is enabled and removes them once they are destroyed. At sandbox exit, any remaining objects in the map are considered leaked. This provides a deterministic way of detecting leaks without relying on the complexities of finalizers and garbage collection.

This approach has several benefits over the former, including:

- Always detects leaks of objects that should be destroyed very close to sandbox exit. The old checker very rarely detected these leaks, because it relied on garbage collection being run in a short window of time.

- Panics if we forgot to enable leak checking on a ref-counted object (we will try to remove it from the map when it is destroyed, but it will never have been added).

- Can store extra logging information in the map values without adding to the size of the ref count struct itself. At the size of just an int64, the ref count object remains compact, meaning frequent operations like IncRef/DecRef are more cache-efficient.

- Can aggregate leak results in a single report after the sandbox exits. Instead of having warnings littered in the log, which were non-deterministically triggered by garbage collection, we can print all warning messages at once. Note that this could also be a limitation: the sandbox must exit properly for leaks to be detected.

Some basic benchmarking indicates that this change does not significantly affect performance when leak checking is enabled, which is understandable since registering/unregistering is only done once for each filesystem object.

Updates #1486.

PiperOrigin-RevId: 338685972
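A minimal sketch of the map-based approach described above: register refcounted objects in a global map when leak checking is enabled, remove them when destroyed, and report whatever remains at exit. The names here (leakRegistry, Register, etc.) are illustrative assumptions, not the actual gVisor refs package API:

```go
package main

import (
	"fmt"
	"sync"
)

// leakRegistry tracks live refcounted objects while leak checking is enabled.
type leakRegistry struct {
	mu   sync.Mutex
	live map[interface{}]string // object -> context used in the leak report
}

var registry = &leakRegistry{live: make(map[interface{}]string)}

// Register records a newly created refcounted object.
func (r *leakRegistry) Register(obj interface{}, what string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.live[obj] = what
}

// Unregister removes a destroyed object; panicking on a missing entry catches
// objects whose leak checking was never enabled.
func (r *leakRegistry) Unregister(obj interface{}) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if _, ok := r.live[obj]; !ok {
		panic("object was never registered for leak checking")
	}
	delete(r.live, obj)
}

// Report prints one aggregated report at exit instead of scattered warnings.
func (r *leakRegistry) Report() {
	r.mu.Lock()
	defer r.mu.Unlock()
	for _, what := range r.live {
		fmt.Printf("leaked: %s\n", what)
	}
}

type fileRef struct{ refs int64 }

func main() {
	f := &fileRef{refs: 1}
	registry.Register(f, "fileRef allocated in main")
	// f is never destroyed, so Unregister is never called.
	registry.Report() // prints: leaked: fileRef allocated in main
}
```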
2020-10-20  Fix nogo test in //pkg/tcpip/...  (Ting-Yu Wang)
PiperOrigin-RevId: 338168977
2020-10-19  Remove legacy bazel configurations.  (Adin Scannell)
Using the newer bazel rules necessitates a transition from proto1 to proto2. In order to resolve the incompatibility between proto2 and gogoproto, the cri runtimeoptions proto must be vendored.

Further, some of the semantics of bazel caching changed during the transition. It is now necessary to:

- Ensure that :gopath depends only on pure library targets, as the propagation of go_binary build attributes (pure, static) will affect the generated files (though content remains the same, there are conflicts with respect to the gopath).

- Update bazel.mk to include the possibility of binaries in the bazel-out directory, as it will now put runsc and others there. This required some refinements to the mechanism of extracting paths, since the existing regex resulted in false positives.

- Change nogo rules to prevent escape generation on binary targets. For some reason, the newer version of bazel attempted to run the nogo analysis on the binary targets, which fails because objdump does not work on the final binary. This must be due to a change in the semantics of aspects in bazel3.

PiperOrigin-RevId: 337958324
2020-10-07  Add staticcheck and staticstyle analyzers.  (Adin Scannell)
This change also adds support to go_stateify for detecting an appropriate receiver name, avoiding a large number of false positives.

PiperOrigin-RevId: 335994587
2020-08-31  Change nogo failures to test failures, instead of build failures.  (Adin Scannell)
PiperOrigin-RevId: 329408633
2020-08-26  Support stdlib analyzers with nogo.  (Adin Scannell)
This immediately revealed an escape analysis violation (!), where the sync.Map was being used in a context where escapes were not allowed. This is a relatively minor fix and is included.

PiperOrigin-RevId: 328611237
2020-04-22  Specify a memory file in platform.New().  (Andrei Vagin)
PiperOrigin-RevId: 307941984
2020-04-20  Add internal nogo analysis & checkescape tool.  (Adin Scannell)
See tools/nogo/README.md.

The checkescape tool is able to perform recursive escape analysis, using the actual generated binary to confirm the results produced by the compiler itself.

As an initial use case, this replaces the manual escape analysis tests used for go_marshal, and validates that the CopyIn and CopyOut paths will not require any allocation or stack splits.

Updates #2243

PiperOrigin-RevId: 307532986
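For a sense of how such a check is typically expressed, here is a minimal sketch; the `+checkescape` annotation syntax is an assumption based on the commit description (tools/nogo/README.md is the authoritative reference), and the copyOut helper and kernel package are hypothetical:

```go
package kernel

import "encoding/binary"

// copyOut copies a value out to a destination buffer. The annotation asks the
// checkescape analyzer to verify, against the generated binary, that this
// path performs no heap allocation and no stack split.
//
// +checkescape:all
//go:nosplit
func copyOut(dst []byte, src *uint64) int {
	if len(dst) < 8 {
		return 0
	}
	// Write directly into the caller-provided buffer; no intermediate
	// allocation is needed, so the escape check can pass.
	binary.LittleEndian.PutUint64(dst, *src)
	return 8
}
```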