Revision tags: llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3 |
|
#
bde4ffe7 |
| 13-Aug-2024 |
David Benjamin <davidben@google.com> |
Don't pass null pointers to memcmp and memcpy in libFuzzer (#96775)
In C, it is UB to call `memcmp(NULL, NULL, 0)`, `memcpy(NULL, NULL, 0)`,
etc. Unfortunately, `(NULL, 0)` is the natural representation of an
empty sequence of objects and extremely common in real world code. As a
result, all C code, and C++ code which calls into C functions, must
carefully guard all calls to `memcpy`.
This is a serious, real world usability issue in C and should be fixed
in the language (see #49459). In the meantime, pay the cost of the extra
branch to avoid tripping UBSan in libFuzzer. Once the usability problem
in C has been fixed, these checks can be removed.
Fixes #96772
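A minimal sketch of the guarded-call pattern this commit describes; the helper names below are hypothetical, not libFuzzer's actual functions, and only illustrate paying the extra branch so that `(NULL, 0)` never reaches the C library:

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical helpers: skip the libc call entirely when the length is zero,
// so null pointers are never passed to memcpy/memcmp.
static void *SafeMemcpy(void *Dst, const void *Src, size_t N) {
  if (N == 0)
    return Dst;           // nothing to copy; avoid memcpy(NULL, NULL, 0)
  return std::memcpy(Dst, Src, N);
}

static int SafeMemcmp(const void *A, const void *B, size_t N) {
  if (N == 0)
    return 0;             // two empty sequences compare equal
  return std::memcmp(A, B, N);
}
```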
|
Revision tags: llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4 |
|
#
747e0d9f |
| 22-Oct-2023 |
Kazu Hirata <kazu@google.com> |
[compiler-rt] Use std::clamp (NFC)
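For context, a minimal illustration of the replacement (not the actual call site): `std::clamp` expresses the usual min/max clamping in one call.

```cpp
#include <algorithm>
#include <cstddef>

// Equivalent to std::min(std::max(Value, Lo), Hi), but clearer about intent.
size_t ClampSize(size_t Value, size_t Lo, size_t Hi) {
  return std::clamp(Value, Lo, Hi);
}
```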
|
Revision tags: llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4 |
|
#
513d9b9f |
| 10-Mar-2023 |
Wu, Yingcong <yingcong.wu@intel.com> |
[libfuzzer] avoid unnecessary copy
Avoid some unnecessary copies.
Reviewed By: fmayer
Differential Revision: https://reviews.llvm.org/D145758
|
#
e09b3be4 |
| 08-Mar-2023 |
Vitaly Buka <vitalybuka@google.com> |
[fuzzer] Make sure we never delete Fuzzer
|
#
91985c2e |
| 08-Mar-2023 |
Wu, Yingcong <yingcong.wu@intel.com> |
Use the right printf format specifiers
Some printf format strings in libFuzzer are using the wrong specifiers; this commit fixes them.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D145033
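An illustrative example of the kind of fix involved (not the exact libFuzzer call sites): `size_t` values take `%zu` and fixed-width integers take the `<cinttypes>` macros, rather than `%d`/`%lu`, whose widths vary across platforms.

```cpp
#include <cinttypes>
#include <cstdio>

void PrintRunStats(size_t NumInputs, uint64_t TotalExecs) {
  // %zu matches size_t; PRIu64 matches uint64_t on every platform.
  std::printf("inputs: %zu, execs: %" PRIu64 "\n", NumInputs, TotalExecs);
}
```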
|
Revision tags: llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, working, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3 |
|
#
aa0e9046 |
| 15-Aug-2022 |
Maxim Schessler <maxim.schessler@gmail.com> |
libFuzzer fix for Ctrl+C not working with -fork and -ignore_crashes=1
In some cases, running libFuzzer in fork mode with -ignore_crashes=1 counts Ctrl+C as a crash and restarts.
Thread: https://github.com/google/oss-fuzz/issues/4547
Credit: Marcel Boehme <marcel.boehme@acm.org>
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D130990
|
Revision tags: llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init |
|
#
92fb3101 |
| 28-Jun-2022 |
Kostya Serebryany <kcc@google.com> |
[libFuzzer] Extend the fuzz target interface to allow -1 return value.
With this change, fuzz targets may choose to return -1 to indicate that the input should not be added to the corpus regardless of the coverage it generated.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D128749
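A sketch of a fuzz target using the extended interface; `LooksWellFormed` is a hypothetical check standing in for real input validation, not part of libFuzzer:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical format check.
static bool LooksWellFormed(const uint8_t *Data, size_t Size) {
  return Size >= 4 && Data[0] == 'F';
}

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {
  if (!LooksWellFormed(Data, Size))
    return -1; // Tell libFuzzer not to add this input to the corpus.
  // ... exercise the code under test with Data/Size ...
  return 0;    // 0 keeps the default behavior.
}
```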
|
Revision tags: llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4, llvmorg-14.0.3, llvmorg-14.0.2, llvmorg-14.0.1, llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3, llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1, llvmorg-15-init, llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2, llvmorg-13.0.1-rc1 |
|
#
9e7b7303 |
| 20-Oct-2021 |
PZ Read <netfirewall@gmail.com> |
[libFuzzer] Update InputInfo.TimeOfUnit when replacing it in the corpus.
Previously, when the fuzzing loop replaced an input in the corpus, it didn't update the execution time of the input. Therefore, some schedulers (e.g. Entropic) would adjust weights based on the incorrect execution time.
This patch updates the execution time of the input when replacing it.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D111479
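A minimal sketch of the fix's idea with simplified types (libFuzzer's actual `InputInfo` and corpus code differ): when a corpus unit is replaced, its recorded execution time must be refreshed, otherwise schedulers such as Entropic keep weighting it by stale timing data.

```cpp
#include <chrono>
#include <cstdint>
#include <utility>
#include <vector>

struct InputInfoSketch {
  std::vector<uint8_t> Data;
  std::chrono::microseconds TimeOfUnit{0};
};

// Replace the unit's bytes and update its execution time in the same step.
void ReplaceUnit(InputInfoSketch &II, std::vector<uint8_t> NewData,
                 std::chrono::microseconds NewTimeOfUnit) {
  II.Data = std::move(NewData);
  II.TimeOfUnit = NewTimeOfUnit; // previously left stale on replacement
}
```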
|
Revision tags: llvmorg-13.0.0, llvmorg-13.0.0-rc4, llvmorg-13.0.0-rc3, llvmorg-13.0.0-rc2 |
|
#
7c921753 |
| 03-Aug-2021 |
Kostya Serebryany <kcc@google.com> |
[libFuzzer] replace Vector/Set with std::vector/std::set. The custom names are not required any more since we now build with a private version of libc++. Fix some of the 81+ character lines. Mechanical change, NFC expected.
[libFuzzer] replace Vector/Set with std::vector/std::set.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D107374
|
Revision tags: llvmorg-13.0.0-rc1, llvmorg-14-init |
|
#
c5d72517 |
| 02-Jul-2021 |
Marco Vanotti <mvanotti@google.com> |
Revert "Refactor mutation strategies into a standalone library"
This reverts commit 361f742f168de0f0f256802a329c19d081615d0d.
|
#
361f742f |
| 02-Jul-2021 |
Aaron Green <aarongreen@google.com> |
Refactor mutation strategies into a standalone library
This change introduces libMutagen/libclang_rt.mutagen.a as a subset of libFuzzer/libclang_rt.fuzzer.a. This library contains only the fuzzing strategies used by libFuzzer to produce new test inputs from provided inputs, dictionaries, and SanitizerCoverage feedback.
Most of this change is simply moving sections of code to one side or the other of the library boundary. The only meaningful new code is:
* The Mutagen.h interface and its implementation in Mutagen.cpp.
* The following methods in MutagenDispatcher.cpp:
  * UseCmp
  * UseMemmem
  * SetCustomMutator
  * SetCustomCrossOver
  * LateInitialize (similar to the MutationDispatcher's original constructor)
  * Mutate_AddWordFromTORC (uses callbacks instead of accessing TPC directly)
  * StartMutationSequence
  * MutationSequence
  * DictionaryEntrySequence
  * RecommendDictionary
  * RecommendDictionaryEntry
* FuzzerMutate.cpp (which now just sets callbacks and handles printing)
* MutagenUnittest.cpp (which adds tests of Mutagen.h)
A note on performance: This change was tested with 100 passes of test/fuzzer/LargeTest.cpp with 1000 runs per pass, both with and without the change. The running time distribution was qualitatively similar both with and without the change, and the average difference was within 30 microseconds (2.240 ms/run vs 2.212 ms/run, respectively). Both times were much higher than observed with the fully optimized system clang (~0.38 ms/run), most likely due to the combination of CMake "dev mode" settings (e.g. CMAKE_BUILD_TYPE="Debug", LLVM_ENABLE_LTO=OFF, etc.). The difference between the two versions built similarly seems to be "in the noise" and suggests no meaningful performance degradation.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D102447
|
Revision tags: llvmorg-12.0.1, llvmorg-12.0.1-rc4, llvmorg-12.0.1-rc3, llvmorg-12.0.1-rc2 |
|
#
fd0a2f75 |
| 26-May-2021 |
Matt Morehouse <mascasa@google.com> |
Revert "Refactor mutation strategies into a standalone library"
This reverts commit c4a41cd77c15c2905ac74beeec09f8343a65a549 due to buildbot failure.
|
#
c4a41cd7 |
| 25-May-2021 |
Aaron Green <aarongreen@google.com> |
Refactor mutation strategies into a standalone library
This change introduces libMutagen/libclang_rt.mutagen.a as a subset of libFuzzer/libclang_rt.fuzzer.a. This library contains only the fuzzing strategies used by libFuzzer to produce new test inputs from provided inputs, dictionaries, and SanitizerCoverage feedback.
Most of this change is simply moving sections of code to one side or the other of the library boundary. The only meaningful new code is:
* The Mutagen.h interface and its implementation in Mutagen.cpp.
* The following methods in MutagenDispatcher.cpp:
  * UseCmp
  * UseMemmem
  * SetCustomMutator
  * SetCustomCrossOver
  * LateInitialize (similar to the MutationDispatcher's original constructor)
  * Mutate_AddWordFromTORC (uses callbacks instead of accessing TPC directly)
  * StartMutationSequence
  * MutationSequence
  * DictionaryEntrySequence
  * RecommendDictionary
  * RecommendDictionaryEntry
* FuzzerMutate.cpp (which now just sets callbacks and handles printing)
* MutagenUnittest.cpp (which adds tests of Mutagen.h)
A note on performance: This change was tested with 100 passes of test/fuzzer/LargeTest.cpp with 1000 runs per pass, both with and without the change. The running time distribution was qualitatively similar both with and without the change, and the average difference was within 30 microseconds (2.240 ms/run vs 2.212 ms/run, respectively). Both times were much higher than observed with the fully optimized system clang (~0.38 ms/run), most likely due to the combination of CMake "dev mode" settings (e.g. CMAKE_BUILD_TYPE="Debug", LLVM_ENABLE_LTO=OFF, etc.). The difference between the two versions built similarly seems to be "in the noise" and suggests no meaningful performance degradation.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D102447
|
Revision tags: llvmorg-12.0.1-rc1 |
|
#
827ccc93 |
| 16-Apr-2021 |
Alexey Vishnyakov <vishnya@ispras.ru> |
[fuzzer] Print reloaded file paths
In order to integrate libFuzzer with a dynamic symbolic execution tool Sydr we need to print loaded file paths.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D100303
|
Revision tags: llvmorg-12.0.0, llvmorg-12.0.0-rc5, llvmorg-12.0.0-rc4 |
|
#
4b82f614 |
| 12-Mar-2021 |
Matt Morehouse <mascasa@google.com> |
[libFuzzer] Use macro instead of __attribute__.
This should fix the Windows buildbot errors.
|
Revision tags: llvmorg-12.0.0-rc3 |
|
#
5908c7ca |
| 04-Mar-2021 |
Jonas Paulsson <paulsson@linux.vnet.ibm.com> |
[libFuzzer] Add attribute noinline on Fuzzer::ExecuteCallback().
The inlining of this function needs to be disabled as it is part of the inspected stack traces. Its string representation will look different depending on whether it was inlined or not, which will cause its string comparison to fail.
When it was inlined in only one of the two execution stacks, minimize_two_crashes.test failed on SystemZ. For details see https://bugs.llvm.org/show_bug.cgi?id=49152.
Reviewers: Ulrich Weigand, Matt Morehouse, Arthur Eubanks
Differential Revision: https://reviews.llvm.org/D97975
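A related commit above (4b82f614) moved the attribute behind a macro so it also builds on Windows; the sketch below shows that general pattern with a made-up macro name, not the project's actual macro.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative portability macro: MSVC spells the attribute differently than
// GCC/Clang.
#if defined(_MSC_VER) && !defined(__clang__)
#define SKETCH_NOINLINE __declspec(noinline)
#else
#define SKETCH_NOINLINE __attribute__((noinline))
#endif

struct FuzzerSketch {
  // Keeping this function out of inlining keeps its frame stable in
  // symbolized stack traces, so tests that compare traces see the same string.
  SKETCH_NOINLINE void ExecuteCallback(const uint8_t *Data, size_t Size);
};
```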
|
#
6708186c |
| 12-Mar-2021 |
Aaron Green <aarongreen@google.com> |
[crt][fuzzer] Fix up various numeric conversions
Attempting to build a standalone libFuzzer in Fuchsia's default toolchain for the purpose of cross-compiling the unit tests revealed a number of not-quite-proper type conversions. Fuchsia's toolchain includes `-std=c++17` and `-Werror`, among others, leading to many errors like `-Wshorten-64-to-32`, `-Wimplicit-float-conversion`, etc.
Most of these have been addressed by simply making the conversion explicit with a `static_cast`. These typically fell into one of two categories: 1) conversions between types where high precision isn't critical, e.g. the "energy" calculations for `InputInfo`, and 2) conversions where the values will never reach the bits being truncated, e.g. `DftTimeInSeconds` is not going to exceed 136 years.
The major exception to this is the number of features: there are several places that treat features as `size_t`, and others as `uint32_t`. This change makes the decision to cap the features at 32 bits. The maximum value of a feature as produced by `TracePC::CollectFeatures` is roughly: (NumPCsInPCTables + ValueBitMap::kMapSizeInBits + ExtraCountersBegin() - ExtraCountersEnd() + log2(SIZE_MAX)) * 8
It's conceivable for extremely large targets and/or extra counters that this limit could be reached. This shouldn't break fuzzing, but it will cause certain features to collide and lower the fuzzer's overall precision. To address this, this change adds a warning to TracePC::PrintModuleInfo about excessive feature size if it is detected, and recommends refactoring the fuzzer into several smaller ones.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D97992
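An illustration of the kind of conversions involved (not the actual call sites): making narrowing explicit satisfies `-Wshorten-64-to-32` and `-Wimplicit-float-conversion` while keeping the intent visible.

```cpp
#include <cstddef>
#include <cstdint>

// Features are capped at 32 bits, so narrowing from size_t is intentional.
uint32_t FeatureOf(size_t RawFeature) {
  return static_cast<uint32_t>(RawFeature);
}

// Precision is not critical here, so the float conversion is made explicit.
double FractionOf(size_t Part, size_t Whole) {
  return static_cast<double>(Part) / static_cast<double>(Whole);
}
```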
|
Revision tags: llvmorg-12.0.0-rc2, llvmorg-11.1.0, llvmorg-11.1.0-rc3, llvmorg-12.0.0-rc1, llvmorg-13-init, llvmorg-11.1.0-rc2, llvmorg-11.1.0-rc1, llvmorg-11.0.1, llvmorg-11.0.1-rc2, llvmorg-11.0.1-rc1 |
|
#
dc62d5ec |
| 23-Oct-2020 |
Max Moroz <mmoroz@chromium.org> |
[libFuzzer] Added -print_full_coverage flag.
-print_full_coverage=1 produces a detailed branch coverage dump when run on a single file. Uses same infrastructure as -print_coverage flag, but prints all branches (regardless of coverage status) in an easy-to-parse format. Usage: For internal use with machine learning fuzzing models which require detailed coverage information on seed files to generate mutations.
Differential Revision: https://reviews.llvm.org/D85928
|
Revision tags: llvmorg-11.0.0, llvmorg-11.0.0-rc6, llvmorg-11.0.0-rc5, llvmorg-11.0.0-rc4, llvmorg-11.0.0-rc3 |
|
#
66df9894 |
| 16-Sep-2020 |
mhl <mhl@fb.com> |
[libfuzzer] Reduce default verbosity when printing large mutation sequences
When using a custom mutator (e.g. thrift mutator, similar to LPM) that calls back into libfuzzer's mutations via `LLVMFuzzerMutate`, the mutation sequences needed to achieve new coverage can get prohibitively large.
Printing these large sequences has two downsides:
1) It makes the logs hard to understand for a human. 2) The performance cost slows down fuzzing.
In this patch I change the `PrintMutationSequence` function to take a max number of entries, to achieve this goal. I also update `PrintStatusForNewUnit` to default to printing only 10 entries, in the default verbosity level (1), requiring the user to set verbosity to 2 if they want the full mutation sequence.
For our use case, turning off verbosity is not an option, as that would also disable `PrintStats()` which is very useful for infrastructure that analyzes the logs in realtime. I imagine most users of libfuzzer always want those logs in the default.
I built a fuzzer locally with this patch applied to libfuzzer.
When running with the default verbosity, I see logs like this:
#65 NEW cov: 4799 ft: 10443 corp: 41/1447Kb lim: 64000 exec/s: 1 rss: 575Mb L: 28658/62542 MS: 196 Custom-CrossOver-ChangeBit-EraseBytes-ChangeBit-ChangeBit-ChangeBit-CrossOver-ChangeBit-CrossOver- DE: "\xff\xff\xff\x0e"-"\xfe\xff\xff\x7f"-"\xfe\xff\xff\x7f"-"\x17\x00\x00\x00\x00\x00\x00\x00"-"\x00\x00\x00\xf9"-"\xff\xff\xff\xff"-"\xfa\xff\xff\xff"-"\xf7\xff\xff\xff"-"@\xff\xff\xff\xff\xff\xff\xff"-"E\x00"- #67 NEW cov: 4810 ft: 10462 corp: 42/1486Kb lim: 64000 exec/s: 1 rss: 577Mb L: 39823/62542 MS: 135 Custom-CopyPart-ShuffleBytes-ShuffleBytes-ChangeBit-ChangeBinInt-EraseBytes-ChangeBit-ChangeBinInt-ChangeBit- DE: "\x01\x00\x00\x00\x00\x00\x01\xf1"-"\x00\x00\x00\x07"-"\x00\x0d"-"\xfd\xff\xff\xff"-"\xfe\xff\xff\xf4"-"\xe3\xff\xff\xff"-"\xff\xff\xff\xf1"-"\xea\xff\xff\xff"-"\x00\x00\x00\xfd"-"\x01\x00\x00\x05"-
Staring hard at the logs it's clear that the cap of 10 is applied.
When running with verbosity level 2, the logs look like the below:
#66 NEW cov: 4700 ft: 10188 corp: 37/1186Kb lim: 64000 exec/s: 2 rss: 509Mb L: 47616/61231 MS: 520 Custom-CopyPart-ChangeBinInt-ChangeBit-ChangeByte-EraseBytes-PersAutoDict-CopyPart-ShuffleBytes-ChangeBit-ShuffleBytes-CopyPart-EraseBytes-CopyPart-ChangeBinInt-CopyPart-ChangeByte-ShuffleBytes-ChangeBinInt-ShuffleBytes-ChangeBit-CMP-ShuffleBytes-ChangeBit-CrossOver-ChangeBinInt-ChangeByte-ShuffleBytes-CrossOver-EraseBytes-ChangeBinInt-InsertRepeatedBytes-PersAutoDict-InsertRepeatedBytes-InsertRepeatedBytes-CrossOver-ChangeByte-ShuffleBytes-CopyPart-ShuffleBytes-CopyPart-CrossOver-ChangeBit-ShuffleBytes-CrossOver-PersAutoDict-ChangeByte-ChangeBit-ShuffleBytes-CrossOver-ChangeByte-EraseBytes-CopyPart-ChangeBinInt-PersAutoDict-CrossOver-ShuffleBytes-CrossOver-CrossOver-EraseBytes-CrossOver-EraseBytes-CrossOver-ChangeBit-ChangeBinInt-ChangeByte-EraseBytes-ShuffleBytes-ShuffleBytes-ChangeBit-EraseBytes-ChangeBinInt-ChangeBit-ChangeBinInt-CopyPart-EraseBytes-PersAutoDict-EraseBytes-CopyPart-ChangeBinInt-ChangeByte-CrossOver-ChangeBinInt-ShuffleBytes-PersAutoDict-PersAutoDict-ChangeBinInt-CopyPart-ChangeBinInt-CrossOver-ChangeBit-ChangeBinInt-CopyPart-ChangeByte-ChangeBit-CopyPart-CrossOver-ChangeByte-ChangeBit-ChangeByte-ShuffleBytes-CMP-ChangeBit-CopyPart-ChangeBit-ChangeByte-ChangeBinInt-PersAutoDict-ChangeBinInt-CrossOver-ChangeBinInt-ChangeBit-ChangeBinInt-ChangeBinInt-PersAutoDict-ChangeBinInt-ChangeBinInt-ChangeByte-CopyPart-ShuffleBytes-ChangeByte-ChangeBit-ChangeByte-ChangeByte-EraseBytes-CrossOver-ChangeByte-ChangeByte-EraseBytes-EraseBytes-InsertRepeatedBytes-ShuffleBytes-CopyPart-CopyPart-ChangeBit-ShuffleBytes-PersAutoDict-ShuffleBytes-ChangeBit-ChangeByte-ChangeBit-ShuffleBytes-ChangeByte-ChangeBinInt-CrossOver-ChangeBinInt-ChangeBit-EraseBytes-CopyPart-ChangeByte-CrossOver-EraseBytes-CrossOver-ChangeByte-ShuffleBytes-ChangeByte-ChangeBinInt-CrossOver-ChangeByte-InsertRepeatedBytes-InsertByte-ShuffleBytes-PersAutoDict-ChangeBit-ChangeByte-ChangeBit-ShuffleBytes-ShuffleBytes-CopyPart-ShuffleBytes-EraseBytes-ShuffleBytes-ShuffleBytes-CrossOver-ChangeBinInt-CopyPart-CopyPart-CopyPart-EraseBytes-EraseBytes-ChangeByte-ChangeBinInt-ShuffleBytes-CMP-InsertByte-EraseBytes-ShuffleBytes-CopyPart-ChangeBit-CrossOver-CopyPart-CopyPart-ShuffleBytes-ChangeByte-ChangeByte-ChangeBinInt-EraseBytes-ChangeByte-ChangeBinInt-ChangeBit-ChangeBit-ChangeByte-ShuffleBytes-PersAutoDict-PersAutoDict-CMP-ChangeBit-ShuffleBytes-PersAutoDict-ChangeBinInt-EraseBytes-EraseBytes-ShuffleBytes-ChangeByte-ShuffleBytes-ChangeBit-EraseBytes-CMP-ShuffleBytes-ChangeByte-ChangeBinInt-EraseBytes-ChangeBinInt-ChangeByte-EraseBytes-ChangeByte-CrossOver-ShuffleBytes-EraseBytes-EraseBytes-ShuffleBytes-ChangeBit-EraseBytes-CopyPart-ShuffleBytes-ShuffleBytes-CrossOver-CopyPart-ChangeBinInt-ShuffleBytes-CrossOver-InsertByte-InsertByte-ChangeBinInt-ChangeBinInt-CopyPart-EraseBytes-ShuffleBytes-ChangeBit-ChangeBit-EraseBytes-ChangeByte-ChangeByte-ChangeBinInt-CrossOver-ChangeBinInt-ChangeBinInt-ShuffleBytes-ShuffleBytes-ChangeByte-ChangeByte-ChangeBinInt-ShuffleBytes-CrossOver-EraseBytes-CopyPart-CopyPart-CopyPart-ChangeBit-ShuffleBytes-ChangeByte-EraseBytes-ChangeByte-InsertRepeatedBytes-InsertByte-InsertRepeatedBytes-PersAutoDict-EraseBytes-ShuffleBytes-ChangeByte-ShuffleBytes-ChangeBinInt-ShuffleBytes-ChangeBinInt-ChangeBit-CrossOver-CrossOver-ShuffleBytes-CrossOver-CopyPart-CrossOver-CrossOver-CopyPart-ChangeByte-ChangeByte-CrossOver-ChangeBit-ChangeBinInt-EraseBytes-ShuffleBytes-EraseBytes-CMP-PersAutoDict-PersAutoDict-InsertByte-
ChangeBit-ChangeByte-CopyPart-CrossOver-ChangeByte-ChangeBit-ChangeByte-CopyPart-ChangeBinInt-EraseBytes-CrossOver-ChangeBit-CrossOver-PersAutoDict-CrossOver-ChangeByte-CrossOver-ChangeByte-ChangeByte-CrossOver-ShuffleBytes-CopyPart-CopyPart-ShuffleBytes-ChangeByte-ChangeByte-ChangeBinInt-ChangeBinInt-ChangeBinInt-ChangeBinInt-ShuffleBytes-CrossOver-ChangeBinInt-ShuffleBytes-ChangeBit-PersAutoDict-ChangeBinInt-ShuffleBytes-ChangeBinInt-ChangeByte-CrossOver-ChangeBit-CopyPart-ChangeBit-ChangeBit-CopyPart-ChangeByte-PersAutoDict-ChangeBit-ShuffleBytes-ChangeByte-ChangeBit-CrossOver-ChangeByte-CrossOver-ChangeByte-CrossOver-ChangeBit-ChangeByte-ChangeBinInt-PersAutoDict-CopyPart-ChangeBinInt-ChangeBit-CrossOver-ChangeBit-PersAutoDict-ShuffleBytes-EraseBytes-CrossOver-ChangeByte-ChangeBinInt-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes-PersAutoDict-CrossOver-ChangeByte-Custom-PersAutoDict-CopyPart-CopyPart-ChangeBinInt-ShuffleBytes-ChangeBinInt-ChangeBit-ShuffleBytes-CrossOver-CMP-ChangeByte-CopyPart-ShuffleBytes-CopyPart-CopyPart-CrossOver-CrossOver-CrossOver-ShuffleBytes-ChangeByte-ChangeBinInt-ChangeBit-ChangeBit-ChangeBit-ChangeByte-EraseBytes-ChangeByte-ChangeBit-ChangeByte-ChangeByte-CopyPart-PersAutoDict-ChangeBinInt-PersAutoDict-PersAutoDict-PersAutoDict-CopyPart-CopyPart-CrossOver-ChangeByte-ChangeBinInt-ShuffleBytes-ChangeBit-CopyPart-EraseBytes-CopyPart-CopyPart-CrossOver-ChangeByte-EraseBytes-ShuffleBytes-ChangeByte-CopyPart-EraseBytes-CopyPart-CrossOver-ChangeBinInt-ChangeBinInt-InsertByte-ChangeBinInt-ChangeBit-ChangeByte-CopyPart-ChangeByte-EraseBytes-ChangeByte-ChangeBit-ChangeByte-ShuffleBytes-CopyPart-ChangeBinInt-EraseBytes-CrossOver-ChangeBit-ChangeBit-CrossOver-EraseBytes-ChangeBinInt-CopyPart-CopyPart-ChangeBinInt-ChangeBit-EraseBytes-InsertRepeatedBytes-EraseBytes-ChangeBit-CrossOver-CrossOver-EraseBytes-EraseBytes-ChangeByte-CopyPart-CopyPart-ShuffleBytes-ChangeByte-ChangeBit-ChangeByte-EraseBytes-ChangeBit-ChangeByte-ChangeByte-CrossOver-CopyPart-EraseBytes-ChangeByte-EraseBytes-ChangeByte-ShuffleBytes-ShuffleBytes-ChangeByte-CopyPart-ChangeByte-ChangeByte-ChangeBit-CopyPart-ChangeBit-ChangeBinInt-CopyPart-ShuffleBytes-ChangeBit-ChangeBinInt-ChangeBit-EraseBytes-CMP-CrossOver-CopyPart-ChangeBinInt-CrossOver-CrossOver-CopyPart-CrossOver-CrossOver-InsertByte-InsertByte-CopyPart-Custom- DE: "warn"-"\x00\x00\x00\x80"-"\xfe\xff\xff\xfb"-"\xff\xff"-"\x10\x00\x00\x00"-"\xfe\xff\xff\xff"-"\xff\xff\xff\xf6"-"U\x01\x00\x00\x00\x00\x00\x00"-"\xd9\xff\xff\xff"-"\xfe\xff\xff\xea"-"\xf0\xff\xff\xff"-"\xfc\xff\xff\xff"-"warn"-"\xff\xff\xff\xff"-"\xfe\xff\xff\xfb"-"\x00\x00\x00\x80"-"\xfe\xff\xff\xf1"-"\xfe\xff\xff\xea"-"\x00\x00\x00\x00\x00\x00\x012"-"\xe2\x00"-"\xfb\xff\xff\xff"-"\x00\x00\x00\x00"-"\xe9\xff\xff\xff"-"\xff\xff"-"\x00\x00\x00\x80"-"\x01\x00\x04\xc9"-"\xf0\xff\xff\xff"-"\xf9\xff\xff\xff"-"\xff\xff\xff\xff\xff\xff\xff\x12"-"\xe2\x00"-"\xfe\xff\xff\xff"-"\xfe\xff\xff\xea"-"\xff\xff\xff\xff"-"\xf4\xff\xff\xff"-"\xe9\xff\xff\xff"-"\xf1\xff\xff\xff"- #48 NEW cov: 4502 ft: 9151 corp: 27/750Kb lim: 64000 exec/s: 2 rss: 458Mb L: 50772/50772 MS: 259 
ChangeByte-ShuffleBytes-ChangeBinInt-ChangeByte-ChangeByte-ChangeByte-ChangeByte-ChangeBit-CopyPart-CrossOver-CopyPart-ChangeByte-CrossOver-CopyPart-ChangeBit-ChangeByte-EraseBytes-ChangeByte-CopyPart-CopyPart-CopyPart-ChangeBit-EraseBytes-ChangeBinInt-CrossOver-CopyPart-CrossOver-CopyPart-ChangeBit-ChangeByte-ChangeBit-InsertByte-CrossOver-InsertRepeatedBytes-InsertRepeatedBytes-InsertRepeatedBytes-ChangeBinInt-EraseBytes-InsertRepeatedBytes-InsertByte-ChangeBit-ShuffleBytes-ChangeBit-ChangeBit-CopyPart-ChangeBit-ChangeByte-CrossOver-ChangeBinInt-ChangeByte-CrossOver-CMP-ChangeByte-CrossOver-ChangeByte-ShuffleBytes-ShuffleBytes-ChangeByte-ChangeBinInt-CopyPart-EraseBytes-CrossOver-ChangeBit-ChangeBinInt-InsertByte-ChangeBit-CopyPart-ChangeBinInt-ChangeByte-CrossOver-ChangeBit-EraseBytes-CopyPart-ChangeBinInt-ChangeBit-ChangeBit-ChangeByte-CopyPart-ChangeBinInt-CrossOver-PersAutoDict-ChangeByte-ChangeBit-ChangeByte-ChangeBinInt-ChangeBinInt-EraseBytes-CopyPart-CopyPart-ChangeByte-ChangeByte-EraseBytes-PersAutoDict-CopyPart-ChangeByte-ChangeByte-EraseBytes-CrossOver-CopyPart-CopyPart-CopyPart-ChangeByte-ChangeBit-CMP-CopyPart-ChangeBinInt-ChangeBinInt-CrossOver-ChangeBit-ChangeBit-EraseBytes-ChangeByte-ShuffleBytes-ChangeBit-ChangeBinInt-CMP-InsertRepeatedBytes-CopyPart-Custom-ChangeByte-CrossOver-EraseBytes-ChangeBit-CopyPart-CrossOver-CMP-ShuffleBytes-EraseBytes-CrossOver-PersAutoDict-ChangeByte-CrossOver-CopyPart-CrossOver-CrossOver-ShuffleBytes-ChangeBinInt-CrossOver-ChangeBinInt-ShuffleBytes-PersAutoDict-ChangeByte-EraseBytes-ChangeBit-CrossOver-EraseBytes-CrossOver-ChangeBit-ChangeBinInt-EraseBytes-InsertByte-InsertRepeatedBytes-InsertByte-InsertByte-ChangeByte-ChangeBinInt-ChangeBit-CrossOver-ChangeByte-CrossOver-EraseBytes-ChangeByte-ShuffleBytes-ChangeBit-ChangeBit-ShuffleBytes-CopyPart-ChangeByte-PersAutoDict-ChangeBit-ChangeByte-InsertRepeatedBytes-CMP-CrossOver-ChangeByte-EraseBytes-ShuffleBytes-CrossOver-ShuffleBytes-ChangeBinInt-ChangeBinInt-CopyPart-PersAutoDict-ShuffleBytes-ChangeBit-CopyPart-ShuffleBytes-CopyPart-EraseBytes-ChangeByte-ChangeBit-ChangeBit-ChangeBinInt-ChangeByte-CopyPart-EraseBytes-ChangeBinInt-EraseBytes-EraseBytes-PersAutoDict-CMP-PersAutoDict-CrossOver-CrossOver-ChangeBit-CrossOver-PersAutoDict-CrossOver-CopyPart-ChangeByte-EraseBytes-ChangeByte-ShuffleBytes-ChangeByte-ChangeByte-CrossOver-ChangeBit-EraseBytes-ChangeByte-EraseBytes-ChangeBinInt-CrossOver-CrossOver-EraseBytes-ChangeBinInt-CrossOver-ChangeBit-ShuffleBytes-ChangeBit-ChangeByte-EraseBytes-ChangeBit-CrossOver-CrossOver-CrossOver-ChangeByte-ChangeBit-ShuffleBytes-ChangeBit-ChangeBit-EraseBytes-CrossOver-CrossOver-CopyPart-ShuffleBytes-ChangeByte-ChangeByte-CopyPart-CrossOver-CopyPart-CrossOver-CrossOver-EraseBytes-EraseBytes-ShuffleBytes-InsertRepeatedBytes-ChangeBit-CopyPart-Custom- DE: "\xfe\xff\xff\xfc"-"\x00\x00\x00\x00"-"F\x00"-"\xf3\xff\xff\xff"-"St9exception"-"_\x00\x00\x00"-"\xf6\xff\xff\xff"-"\xfe\xff\xff\xff"-"\x00\x00\x00\x00"-"p\x02\x00\x00\x00\x00\x00\x00"-"\xfe\xff\xff\xfb"-"\xff\xff"-"\xff\xff\xff\xff"-"\x01\x00\x00\x07"-"\xfe\xff\xff\xfe"-
These are prohibitively large and of limited value in the default case (when someone is running the fuzzer, not debugging it), in my opinion.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D86658
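A minimal sketch of the capping described above; names and signature only approximate libFuzzer's, which drives the cap from the verbosity level:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

// Print at most MaxEntries mutation names; callers lift the cap only when
// verbosity is raised, keeping default logs readable.
void PrintMutationSequenceSketch(const std::vector<std::string> &Sequence,
                                 size_t MaxEntries) {
  size_t N = std::min(Sequence.size(), MaxEntries);
  for (size_t I = 0; I < N; I++)
    std::printf("%s-", Sequence[I].c_str());
  if (Sequence.size() > N)
    std::printf("..."); // indicate the sequence was truncated
  std::printf("\n");
}
```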
|
Revision tags: llvmorg-11.0.0-rc2, llvmorg-11.0.0-rc1, llvmorg-12-init |
|
#
1bb1eac6 |
| 08-Jul-2020 |
Dokyung Song <dokyungs@google.com> |
[libFuzzer] Add a command-line option for tracing mutation of corpus inputs in the dot graph format.
This patch adds a new command-line option -mutation_graph_file=FILE for debugging purposes, which traces how corpus inputs evolve during a fuzzing run. For each new input that is added to the corpus, a new vertex corresponding to the added input, as well as a new edge connecting its base input to the new input, are written to the given file. Each vertex is labeled with the filename of the input, and each edge is labeled with the mutation sequence that led to the input w.r.t. its base input.
The format of the mutation graph file is the dot file format. Once prepended and appended with "graph {" and "}", respectively, the graph becomes a valid dot file and can be visualized.
Differential Revision: https://reviews.llvm.org/D86560
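A sketch of what writing one corpus addition could look like under the format described above; the exact labels and quoting are illustrative, not the actual output:

```cpp
#include <cstdio>
#include <string>

// Append one vertex (the new input) and one edge (base input to new input,
// labeled with the mutation sequence). Wrapping the whole file in
// "graph {" ... "}" afterwards yields a valid dot graph.
void AppendMutationEdge(std::FILE *F, const std::string &NewInput,
                        const std::string &BaseInput,
                        const std::string &MutationSequence) {
  std::fprintf(F, "\"%s\"\n", NewInput.c_str());
  std::fprintf(F, "\"%s\" -- \"%s\" [label=\"%s\"];\n", BaseInput.c_str(),
               NewInput.c_str(), MutationSequence.c_str());
}
```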
|
#
5cda4dc7 |
| 17-Aug-2020 |
Dokyung Song <dokyungs@google.com> |
[libFuzzer] Scale energy assigned to each input based on input execution time.
This patch scales the energy computed by the Entropic schedule based on the execution time of each input. The input execution time is compared with the average execution time of inputs in the corpus, and, based on the amount by which they differ, the energy is scaled from 0.1x (for slow-executing inputs) to 3x (for fast-executing inputs). Note that the exact scaling criteria and formula are borrowed from AFL.
On FuzzBench, this gives a sizeable throughput increase, which in turn leads to more coverage on several benchmarks. For details, see the following report.
https://storage.googleapis.com/fuzzer-test-suite-public/exectime-report/index.html
Differential Revision: https://reviews.llvm.org/D86092
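A sketch of the scaling idea; the thresholds and factors below are illustrative stand-ins for the AFL-derived tiers, not the exact constants used:

```cpp
// Scale an input's energy by comparing its execution time with the corpus
// average: much slower inputs are damped toward 0.1x, much faster ones are
// boosted toward 3x.
double ScaleEnergyByExecTime(double Energy, double TimeOfUnit,
                             double AverageUnitTime) {
  const double Ratio = TimeOfUnit / AverageUnitTime;
  double Factor = 1.0;
  if (Ratio > 10.0)
    Factor = 0.1;   // far slower than average
  else if (Ratio > 2.0)
    Factor = 0.5;
  else if (Ratio > 1.0)
    Factor = 0.75;
  else if (Ratio > 0.5)
    Factor = 1.5;
  else
    Factor = 3.0;   // far faster than average
  return Energy * Factor;
}
```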
|
#
b53243e1 |
| 01-Sep-2020 |
Dokyung Song <dokyungs@google.com> |
[libFuzzer] Evenly select inputs to cross over with from the corpus regardless of the input's coverage.
This patch adds an option "cross_over_uniform_dist", which, if 1, considers all inputs in the corpus for the crossover input selection. More specifically, this patch uses a uniform distribution of all inputs in the corpus for the CrossOver input selection. Note that input selection for mutation is still fully determined by the scheduling policy (i.e., vanilla or Entropic); the uniform distribution only applies to the secondary input selection, only for the crossover mutation of the base input chosen by the scheduling policy. This way the corpus inputs that have useful fragments in them, even though they are deprioritized by the scheduling policy, have chances of getting mixed with other inputs that are prioritized and selected as base input for mutation.
Differential Revision: https://reviews.llvm.org/D86954
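A sketch of the uniform secondary selection (not libFuzzer's exact code or RNG): with the option on, the crossover partner is drawn uniformly from the whole corpus; otherwise the scheduler-weighted choice applies.

```cpp
#include <cstddef>
#include <random>

// Pick the index of the corpus input to cross over with.
size_t ChooseCrossOverPartner(size_t CorpusSize, std::mt19937 &Rand,
                              bool CrossOverUniformDist,
                              size_t WeightedChoice) {
  if (CrossOverUniformDist && CorpusSize > 0) {
    std::uniform_int_distribution<size_t> Uniform(0, CorpusSize - 1);
    return Uniform(Rand); // every corpus input is equally likely
  }
  return WeightedChoice;  // fall back to the scheduler-weighted selection
}
```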
|
#
62673c43 |
| 31-Jul-2020 |
Dokyung Song <dokyungs@google.com> |
[libFuzzer] Add an option to keep initial seed inputs around.
This patch adds an option "keep_seed" to keep all initial seed inputs in the corpus. Previously, only the initial seed inputs that found new coverage were added to the corpus, and all the other initial inputs were discarded. We observed in some circumstances that useful initial seed inputs are discarded as they find no new coverage, even though they contain useful fragments in them (e.g., the SQLITE3 FuzzBench benchmark). This newly added option provides a way to keep seed inputs in the corpus for those circumstances. With this patch, and with -keep_seed=1, all initial seed inputs are kept in the corpus regardless of whether they find new coverage or not. Further, these seed inputs are not replaced with smaller inputs even if -reduce_inputs=1.
Differential Revision: https://reviews.llvm.org/D86577
|
#
71397362 |
| 01-Sep-2020 |
Matt Morehouse <mascasa@google.com> |
Revert "[libfuzzer] Reduce default verbosity when printing large mutation sequences"
This reverts commit 2665425908e00618074e42155ec922a37f7c9002 due to buildbot failure.
|
#
26654259 |
| 01-Sep-2020 |
mhl <mhl@fb.com> |
[libfuzzer] Reduce default verbosity when printing large mutation sequences
When using a custom mutator (e.g. thrift mutator, similar to LPM) that calls back into libfuzzer's mutations via `LLVMFuzzerMutate`, the mutation sequences needed to achieve new coverage can get prohibitively large.
Printing these large sequences has two downsides:
1) It makes the logs hard to understand for a human. 2) The performance cost slows down fuzzing.
In this patch I change the `PrintMutationSequence` function to take a max number of entries, to achieve this goal. I also update `PrintStatusForNewUnit` to default to printing only 10 entries, in the default verbosity level (1), requiring the user to set verbosity to 2 if they want the full mutation sequence.
For our use case, turning off verbosity is not an option, as that would also disable `PrintStats()` which is very useful for infrastructure that analyzes the logs in realtime. I imagine most users of libfuzzer always want those logs in the default.
I built a fuzzer locally with this patch applied to libfuzzer.
When running with the default verbosity, I see logs like this:
#65 NEW cov: 4799 ft: 10443 corp: 41/1447Kb lim: 64000 exec/s: 1 rss: 575Mb L: 28658/62542 MS: 196 Custom-CrossOver-ChangeBit-EraseBytes-ChangeBit-ChangeBit-ChangeBit-CrossOver-ChangeBit-CrossOver- DE: "\xff\xff\xff\x0e"-"\xfe\xff\xff\x7f"-"\xfe\xff\xff\x7f"-"\x17\x00\x00\x00\x00\x00\x00\x00"-"\x00\x00\x00\xf9"-"\xff\xff\xff\xff"-"\xfa\xff\xff\xff"-"\xf7\xff\xff\xff"-"@\xff\xff\xff\xff\xff\xff\xff"-"E\x00"- #67 NEW cov: 4810 ft: 10462 corp: 42/1486Kb lim: 64000 exec/s: 1 rss: 577Mb L: 39823/62542 MS: 135 Custom-CopyPart-ShuffleBytes-ShuffleBytes-ChangeBit-ChangeBinInt-EraseBytes-ChangeBit-ChangeBinInt-ChangeBit- DE: "\x01\x00\x00\x00\x00\x00\x01\xf1"-"\x00\x00\x00\x07"-"\x00\x0d"-"\xfd\xff\xff\xff"-"\xfe\xff\xff\xf4"-"\xe3\xff\xff\xff"-"\xff\xff\xff\xf1"-"\xea\xff\xff\xff"-"\x00\x00\x00\xfd"-"\x01\x00\x00\x05"-
Staring hard at the logs it's clear that the cap of 10 is applied.
When running with verbosity level 2, the logs look like the below:
#66 NEW cov: 4700 ft: 10188 corp: 37/1186Kb lim: 64000 exec/s: 2 rss: 509Mb L: 47616/61231 MS: 520 Custom-CopyPart-ChangeBinInt-ChangeBit-ChangeByte-EraseBytes-PersAutoDict-CopyPart-ShuffleBytes-ChangeBit-ShuffleBytes-CopyPart-EraseBytes-CopyPart-ChangeBinInt-CopyPart-ChangeByte-ShuffleBytes-ChangeBinInt-ShuffleBytes-ChangeBit-CMP-ShuffleBytes-ChangeBit-CrossOver-ChangeBinInt-ChangeByte-ShuffleBytes-CrossOver-EraseBytes-ChangeBinInt-InsertRepeatedBytes-PersAutoDict-InsertRepeatedBytes-InsertRepeatedBytes-CrossOver-ChangeByte-ShuffleBytes-CopyPart-ShuffleBytes-CopyPart-CrossOver-ChangeBit-ShuffleBytes-CrossOver-PersAutoDict-ChangeByte-ChangeBit-ShuffleBytes-CrossOver-ChangeByte-EraseBytes-CopyPart-ChangeBinInt-PersAutoDict-CrossOver-ShuffleBytes-CrossOver-CrossOver-EraseBytes-CrossOver-EraseBytes-CrossOver-ChangeBit-ChangeBinInt-ChangeByte-EraseBytes-ShuffleBytes-ShuffleBytes-ChangeBit-EraseBytes-ChangeBinInt-ChangeBit-ChangeBinInt-CopyPart-EraseBytes-PersAutoDict-EraseBytes-CopyPart-ChangeBinInt-ChangeByte-CrossOver-ChangeBinInt-ShuffleBytes-PersAutoDict-PersAutoDict-ChangeBinInt-CopyPart-ChangeBinInt-CrossOver-ChangeBit-ChangeBinInt-CopyPart-ChangeByte-ChangeBit-CopyPart-CrossOver-ChangeByte-ChangeBit-ChangeByte-ShuffleBytes-CMP-ChangeBit-CopyPart-ChangeBit-ChangeByte-ChangeBinInt-PersAutoDict-ChangeBinInt-CrossOver-ChangeBinInt-ChangeBit-ChangeBinInt-ChangeBinInt-PersAutoDict-ChangeBinInt-ChangeBinInt-ChangeByte-CopyPart-ShuffleBytes-ChangeByte-ChangeBit-ChangeByte-ChangeByte-EraseBytes-CrossOver-ChangeByte-ChangeByte-EraseBytes-EraseBytes-InsertRepeatedBytes-ShuffleBytes-CopyPart-CopyPart-ChangeBit-ShuffleBytes-PersAutoDict-ShuffleBytes-ChangeBit-ChangeByte-ChangeBit-ShuffleBytes-ChangeByte-ChangeBinInt-CrossOver-ChangeBinInt-ChangeBit-EraseBytes-CopyPart-ChangeByte-CrossOver-EraseBytes-CrossOver-ChangeByte-ShuffleBytes-ChangeByte-ChangeBinInt-CrossOver-ChangeByte-InsertRepeatedBytes-InsertByte-ShuffleBytes-PersAutoDict-ChangeBit-ChangeByte-ChangeBit-ShuffleBytes-ShuffleBytes-CopyPart-ShuffleBytes-EraseBytes-ShuffleBytes-ShuffleBytes-CrossOver-ChangeBinInt-CopyPart-CopyPart-CopyPart-EraseBytes-EraseBytes-ChangeByte-ChangeBinInt-ShuffleBytes-CMP-InsertByte-EraseBytes-ShuffleBytes-CopyPart-ChangeBit-CrossOver-CopyPart-CopyPart-ShuffleBytes-ChangeByte-ChangeByte-ChangeBinInt-EraseBytes-ChangeByte-ChangeBinInt-ChangeBit-ChangeBit-ChangeByte-ShuffleBytes-PersAutoDict-PersAutoDict-CMP-ChangeBit-ShuffleBytes-PersAutoDict-ChangeBinInt-EraseBytes-EraseBytes-ShuffleBytes-ChangeByte-ShuffleBytes-ChangeBit-EraseBytes-CMP-ShuffleBytes-ChangeByte-ChangeBinInt-EraseBytes-ChangeBinInt-ChangeByte-EraseBytes-ChangeByte-CrossOver-ShuffleBytes-EraseBytes-EraseBytes-ShuffleBytes-ChangeBit-EraseBytes-CopyPart-ShuffleBytes-ShuffleBytes-CrossOver-CopyPart-ChangeBinInt-ShuffleBytes-CrossOver-InsertByte-InsertByte-ChangeBinInt-ChangeBinInt-CopyPart-EraseBytes-ShuffleBytes-ChangeBit-ChangeBit-EraseBytes-ChangeByte-ChangeByte-ChangeBinInt-CrossOver-ChangeBinInt-ChangeBinInt-ShuffleBytes-ShuffleBytes-ChangeByte-ChangeByte-ChangeBinInt-ShuffleBytes-CrossOver-EraseBytes-CopyPart-CopyPart-CopyPart-ChangeBit-ShuffleBytes-ChangeByte-EraseBytes-ChangeByte-InsertRepeatedBytes-InsertByte-InsertRepeatedBytes-PersAutoDict-EraseBytes-ShuffleBytes-ChangeByte-ShuffleBytes-ChangeBinInt-ShuffleBytes-ChangeBinInt-ChangeBit-CrossOver-CrossOver-ShuffleBytes-CrossOver-CopyPart-CrossOver-CrossOver-CopyPart-ChangeByte-ChangeByte-CrossOver-ChangeBit-ChangeBinInt-EraseBytes-ShuffleBytes-EraseBytes-CMP-PersAutoDict-PersAutoDict-InsertByte-
ChangeBit-ChangeByte-CopyPart-CrossOver-ChangeByte-ChangeBit-ChangeByte-CopyPart-ChangeBinInt-EraseBytes-CrossOver-ChangeBit-CrossOver-PersAutoDict-CrossOver-ChangeByte-CrossOver-ChangeByte-ChangeByte-CrossOver-ShuffleBytes-CopyPart-CopyPart-ShuffleBytes-ChangeByte-ChangeByte-ChangeBinInt-ChangeBinInt-ChangeBinInt-ChangeBinInt-ShuffleBytes-CrossOver-ChangeBinInt-ShuffleBytes-ChangeBit-PersAutoDict-ChangeBinInt-ShuffleBytes-ChangeBinInt-ChangeByte-CrossOver-ChangeBit-CopyPart-ChangeBit-ChangeBit-CopyPart-ChangeByte-PersAutoDict-ChangeBit-ShuffleBytes-ChangeByte-ChangeBit-CrossOver-ChangeByte-CrossOver-ChangeByte-CrossOver-ChangeBit-ChangeByte-ChangeBinInt-PersAutoDict-CopyPart-ChangeBinInt-ChangeBit-CrossOver-ChangeBit-PersAutoDict-ShuffleBytes-EraseBytes-CrossOver-ChangeByte-ChangeBinInt-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes-PersAutoDict-CrossOver-ChangeByte-Custom-PersAutoDict-CopyPart-CopyPart-ChangeBinInt-ShuffleBytes-ChangeBinInt-ChangeBit-ShuffleBytes-CrossOver-CMP-ChangeByte-CopyPart-ShuffleBytes-CopyPart-CopyPart-CrossOver-CrossOver-CrossOver-ShuffleBytes-ChangeByte-ChangeBinInt-ChangeBit-ChangeBit-ChangeBit-ChangeByte-EraseBytes-ChangeByte-ChangeBit-ChangeByte-ChangeByte-CopyPart-PersAutoDict-ChangeBinInt-PersAutoDict-PersAutoDict-PersAutoDict-CopyPart-CopyPart-CrossOver-ChangeByte-ChangeBinInt-ShuffleBytes-ChangeBit-CopyPart-EraseBytes-CopyPart-CopyPart-CrossOver-ChangeByte-EraseBytes-ShuffleBytes-ChangeByte-CopyPart-EraseBytes-CopyPart-CrossOver-ChangeBinInt-ChangeBinInt-InsertByte-ChangeBinInt-ChangeBit-ChangeByte-CopyPart-ChangeByte-EraseBytes-ChangeByte-ChangeBit-ChangeByte-ShuffleBytes-CopyPart-ChangeBinInt-EraseBytes-CrossOver-ChangeBit-ChangeBit-CrossOver-EraseBytes-ChangeBinInt-CopyPart-CopyPart-ChangeBinInt-ChangeBit-EraseBytes-InsertRepeatedBytes-EraseBytes-ChangeBit-CrossOver-CrossOver-EraseBytes-EraseBytes-ChangeByte-CopyPart-CopyPart-ShuffleBytes-ChangeByte-ChangeBit-ChangeByte-EraseBytes-ChangeBit-ChangeByte-ChangeByte-CrossOver-CopyPart-EraseBytes-ChangeByte-EraseBytes-ChangeByte-ShuffleBytes-ShuffleBytes-ChangeByte-CopyPart-ChangeByte-ChangeByte-ChangeBit-CopyPart-ChangeBit-ChangeBinInt-CopyPart-ShuffleBytes-ChangeBit-ChangeBinInt-ChangeBit-EraseBytes-CMP-CrossOver-CopyPart-ChangeBinInt-CrossOver-CrossOver-CopyPart-CrossOver-CrossOver-InsertByte-InsertByte-CopyPart-Custom- DE: "warn"-"\x00\x00\x00\x80"-"\xfe\xff\xff\xfb"-"\xff\xff"-"\x10\x00\x00\x00"-"\xfe\xff\xff\xff"-"\xff\xff\xff\xf6"-"U\x01\x00\x00\x00\x00\x00\x00"-"\xd9\xff\xff\xff"-"\xfe\xff\xff\xea"-"\xf0\xff\xff\xff"-"\xfc\xff\xff\xff"-"warn"-"\xff\xff\xff\xff"-"\xfe\xff\xff\xfb"-"\x00\x00\x00\x80"-"\xfe\xff\xff\xf1"-"\xfe\xff\xff\xea"-"\x00\x00\x00\x00\x00\x00\x012"-"\xe2\x00"-"\xfb\xff\xff\xff"-"\x00\x00\x00\x00"-"\xe9\xff\xff\xff"-"\xff\xff"-"\x00\x00\x00\x80"-"\x01\x00\x04\xc9"-"\xf0\xff\xff\xff"-"\xf9\xff\xff\xff"-"\xff\xff\xff\xff\xff\xff\xff\x12"-"\xe2\x00"-"\xfe\xff\xff\xff"-"\xfe\xff\xff\xea"-"\xff\xff\xff\xff"-"\xf4\xff\xff\xff"-"\xe9\xff\xff\xff"-"\xf1\xff\xff\xff"- #48 NEW cov: 4502 ft: 9151 corp: 27/750Kb lim: 64000 exec/s: 2 rss: 458Mb L: 50772/50772 MS: 259 
ChangeByte-ShuffleBytes-ChangeBinInt-ChangeByte-ChangeByte-ChangeByte-ChangeByte-ChangeBit-CopyPart-CrossOver-CopyPart-ChangeByte-CrossOver-CopyPart-ChangeBit-ChangeByte-EraseBytes-ChangeByte-CopyPart-CopyPart-CopyPart-ChangeBit-EraseBytes-ChangeBinInt-CrossOver-CopyPart-CrossOver-CopyPart-ChangeBit-ChangeByte-ChangeBit-InsertByte-CrossOver-InsertRepeatedBytes-InsertRepeatedBytes-InsertRepeatedBytes-ChangeBinInt-EraseBytes-InsertRepeatedBytes-InsertByte-ChangeBit-ShuffleBytes-ChangeBit-ChangeBit-CopyPart-ChangeBit-ChangeByte-CrossOver-ChangeBinInt-ChangeByte-CrossOver-CMP-ChangeByte-CrossOver-ChangeByte-ShuffleBytes-ShuffleBytes-ChangeByte-ChangeBinInt-CopyPart-EraseBytes-CrossOver-ChangeBit-ChangeBinInt-InsertByte-ChangeBit-CopyPart-ChangeBinInt-ChangeByte-CrossOver-ChangeBit-EraseBytes-CopyPart-ChangeBinInt-ChangeBit-ChangeBit-ChangeByte-CopyPart-ChangeBinInt-CrossOver-PersAutoDict-ChangeByte-ChangeBit-ChangeByte-ChangeBinInt-ChangeBinInt-EraseBytes-CopyPart-CopyPart-ChangeByte-ChangeByte-EraseBytes-PersAutoDict-CopyPart-ChangeByte-ChangeByte-EraseBytes-CrossOver-CopyPart-CopyPart-CopyPart-ChangeByte-ChangeBit-CMP-CopyPart-ChangeBinInt-ChangeBinInt-CrossOver-ChangeBit-ChangeBit-EraseBytes-ChangeByte-ShuffleBytes-ChangeBit-ChangeBinInt-CMP-InsertRepeatedBytes-CopyPart-Custom-ChangeByte-CrossOver-EraseBytes-ChangeBit-CopyPart-CrossOver-CMP-ShuffleBytes-EraseBytes-CrossOver-PersAutoDict-ChangeByte-CrossOver-CopyPart-CrossOver-CrossOver-ShuffleBytes-ChangeBinInt-CrossOver-ChangeBinInt-ShuffleBytes-PersAutoDict-ChangeByte-EraseBytes-ChangeBit-CrossOver-EraseBytes-CrossOver-ChangeBit-ChangeBinInt-EraseBytes-InsertByte-InsertRepeatedBytes-InsertByte-InsertByte-ChangeByte-ChangeBinInt-ChangeBit-CrossOver-ChangeByte-CrossOver-EraseBytes-ChangeByte-ShuffleBytes-ChangeBit-ChangeBit-ShuffleBytes-CopyPart-ChangeByte-PersAutoDict-ChangeBit-ChangeByte-InsertRepeatedBytes-CMP-CrossOver-ChangeByte-EraseBytes-ShuffleBytes-CrossOver-ShuffleBytes-ChangeBinInt-ChangeBinInt-CopyPart-PersAutoDict-ShuffleBytes-ChangeBit-CopyPart-ShuffleBytes-CopyPart-EraseBytes-ChangeByte-ChangeBit-ChangeBit-ChangeBinInt-ChangeByte-CopyPart-EraseBytes-ChangeBinInt-EraseBytes-EraseBytes-PersAutoDict-CMP-PersAutoDict-CrossOver-CrossOver-ChangeBit-CrossOver-PersAutoDict-CrossOver-CopyPart-ChangeByte-EraseBytes-ChangeByte-ShuffleBytes-ChangeByte-ChangeByte-CrossOver-ChangeBit-EraseBytes-ChangeByte-EraseBytes-ChangeBinInt-CrossOver-CrossOver-EraseBytes-ChangeBinInt-CrossOver-ChangeBit-ShuffleBytes-ChangeBit-ChangeByte-EraseBytes-ChangeBit-CrossOver-CrossOver-CrossOver-ChangeByte-ChangeBit-ShuffleBytes-ChangeBit-ChangeBit-EraseBytes-CrossOver-CrossOver-CopyPart-ShuffleBytes-ChangeByte-ChangeByte-CopyPart-CrossOver-CopyPart-CrossOver-CrossOver-EraseBytes-EraseBytes-ShuffleBytes-InsertRepeatedBytes-ChangeBit-CopyPart-Custom- DE: "\xfe\xff\xff\xfc"-"\x00\x00\x00\x00"-"F\x00"-"\xf3\xff\xff\xff"-"St9exception"-"_\x00\x00\x00"-"\xf6\xff\xff\xff"-"\xfe\xff\xff\xff"-"\x00\x00\x00\x00"-"p\x02\x00\x00\x00\x00\x00\x00"-"\xfe\xff\xff\xfb"-"\xff\xff"-"\xff\xff\xff\xff"-"\x01\x00\x00\x07"-"\xfe\xff\xff\xfe"-
These are prohibitively large and of limited value in the default case (when someone is running the fuzzer, not debugging it), in my opinion.
Reviewed By: morehouse
Differential Revision: https://reviews.llvm.org/D86658
|