#
10c5b023 |
| 14-Nov-2019 |
maxv <maxv@NetBSD.org> |
Add support for Kernel Memory Sanitizer (kMSan). It detects uninitialized memory used by the kernel at run time, and just like kASan and kCSan, it is an excellent feature. It has already detected 38 uninitialized variables in the kernel during my testing, which I have since discreetly fixed.
We use two shadows:
- "shad", to track uninitialized memory with a bit granularity (1:1). Each bit set to 1 in the shad corresponds to one uninitialized bit of real kernel memory.
- "orig", to track the origin of the memory with a 4-byte granularity (1:1). Each uint32_t cell in the orig indicates the origin of the associated uint32_t of real kernel memory.
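A minimal sketch of the two lookups this layout implies, assuming a direct 1:1 offset for each shadow (the offsets and names here are illustrative, not the actual kMSan constants):

#include <stdint.h>

/* Assumed placements, for illustration only. */
#define KMSAN_SHAD_OFF	0x0000100000000000UL	/* kernel VA -> shad VA */
#define KMSAN_ORIG_OFF	0x0000200000000000UL	/* kernel VA -> orig VA */

/* 1:1 bit granularity: byte N of the shad mirrors byte N of kernel
 * memory, one bit per bit. */
static inline uint8_t *
kmsan_shad(const void *addr)
{
	return (uint8_t *)((uintptr_t)addr + KMSAN_SHAD_OFF);
}

/* 1:1 with 4-byte cells: one uint32_t of origin per uint32_t of
 * kernel memory, so round the address down to a 4-byte boundary. */
static inline uint32_t *
kmsan_orig(const void *addr)
{
	return (uint32_t *)(((uintptr_t)addr & ~(uintptr_t)3) +
	    KMSAN_ORIG_OFF);
}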
The memory consumption of these shadows is substantial, so at least 4GB of RAM is recommended to run kMSan.
The compiler inserts calls to specific __msan_* functions on each memory access, to manage both the shad and the orig and detect uninitialized memory accesses that change the execution flow (like an "if" on an uninitialized variable).
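Conceptually, the instrumentation turns a branch on a possibly-uninitialized variable into something like the following; the __msan_* name mirrors LLVM's interface, but the signature and body here are simplified stand-ins:

#include <stdint.h>
#include <stdio.h>

static uint32_t shad_of_x;	/* stand-in for x's shad bits */
static uint32_t orig_of_x;	/* stand-in for x's orig cell */

/* Stub for the runtime's reporting hook (simplified signature). */
static void
__msan_warning(uint32_t origin)
{
	printf("kMSan: uninitialized use, origin %#x\n", origin);
}

static int
branch_on(int x)
{
	/* Inserted check: the branch must not depend on uninit bits. */
	if (shad_of_x != 0)
		__msan_warning(orig_of_x);
	if (x)		/* the original branch */
		return 1;
	return 0;
}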
We mark several types of memory buffers (stack, pools, kmem, malloc, uvm_km) as uninitialized, and check each buffer passed to copyout, copyoutstr, bwrite, if_transmit_lock and DMA operations, to detect uninitialized memory that leaves the system. This allows us to detect kernel info leaks in a way that is more efficient and also more user-friendly than KLEAK.
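The exit-point check itself can be sketched as a scan of the shad bytes covering the buffer; kmsan_check_buf() is a hypothetical name, with kmsan_shad() as sketched earlier:

#include <stdint.h>
#include <stddef.h>

extern void panic(const char *fmt, ...);
uint8_t *kmsan_shad(const void *addr);

static void
kmsan_check_buf(const void *buf, size_t len, const char *what)
{
	const uint8_t *shad = kmsan_shad(buf);
	size_t i;

	for (i = 0; i < len; i++) {
		if (shad[i] != 0)
			panic("kMSan: uninitialized memory in %s", what);
	}
}

/* e.g., at the top of copyout(kaddr, uaddr, len):
 *	kmsan_check_buf(kaddr, len, "copyout");
 */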
Contrary to kASan, kMSan requires comprehensive coverage, i.e., we cannot tolerate having even one non-instrumented function, because this could cause false positives. kMSan cannot instrument assembly functions, so I converted most of them to __asm__ inlines, which kMSan is able to instrument. Those that remain receive special treatment.
Contrary to kASan again, kMSan uses a TLS, so we must context-switch this TLS during interrupts. We use different contexts depending on the interrupt level.
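A hedged sketch of that context switch, assuming one context per interrupt level (the structure layout, the names and the level count are all assumptions):

#include <stdint.h>

#define KMSAN_NCTX	8	/* assumed: base level + interrupt levels */

struct kmsan_ctx {
	uint8_t param_shad[64];		/* shad of function parameters */
	uint32_t param_orig[16];	/* orig of function parameters */
};

static struct kmsan_ctx kmsan_ctxs[KMSAN_NCTX];
static unsigned kmsan_level;	/* current interrupt nesting level */

/* On interrupt entry, switch to a fresh TLS context... */
static inline struct kmsan_ctx *
kmsan_intr_enter(void)
{
	return &kmsan_ctxs[++kmsan_level];
}

/* ...and restore the interrupted one on exit. */
static inline struct kmsan_ctx *
kmsan_intr_leave(void)
{
	return &kmsan_ctxs[--kmsan_level];
}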
The orig tracks precisely the origin of a buffer. We use a special encoding for the orig values, and pack together in each uint32_t cell of the orig:
- a code designating the type of memory (Stack, Pool, etc), and
- a compressed pointer, which points either (1) to a string containing the name of the variable associated with the cell, or (2) to an area in the kernel .text section which we resolve to a symbol name + offset.
This encoding avoids consuming extra memory to associate information with each cell, and produces precise output that can report, for example, the name of an uninitialized variable on the stack, the function in which it was pushed on the stack, and the function where we accessed this uninitialized variable.
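A sketch of this packing, assuming a 2-bit type code and a pointer compressed as a 4-byte-aligned offset from the kernel base (both widths and the base are assumptions for illustration):

#include <stdint.h>

#define KMSAN_TYPE_STACK	0U
#define KMSAN_TYPE_POOL		1U	/* etc. */

#define KERNBASE_ASSUMED	0xffffffff80000000UL

static inline uint32_t
kmsan_orig_encode(uint32_t type, const void *ptr)
{
	uint32_t off = (uint32_t)(((uintptr_t)ptr - KERNBASE_ASSUMED) >> 2);

	return (type << 30) | (off & 0x3fffffffU);
}

static inline const void *
kmsan_orig_decode(uint32_t cell, uint32_t *type)
{
	*type = cell >> 30;
	return (const void *)(KERNBASE_ASSUMED +
	    ((uintptr_t)(cell & 0x3fffffffU) << 2));
}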
kMSan is available with LLVM, but not with GCC.
The code is organized in a way that is similar to kASan and kCSan, so architectures other than amd64 can be supported.
|
#
ae73490e |
| 07-Apr-2019 |
maxv <maxv@NetBSD.org> |
Provide a code argument in kasan_mark(), and give a code to each caller. Five codes used: GenericRedZone, MallocRedZone, KmemRedZone, PoolRedZone, and PoolUseAfterFree.
This can greatly help debugging complex memory corruptions.
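A hedged sketch of what the changed interface looks like; the prototype and the code values are illustrative, not copied from the tree:

#include <stdint.h>
#include <stddef.h>

/* One code per red-zone type, so reports can name the culprit. */
#define KASAN_GENERIC_REDZONE	0xFA
#define KASAN_MALLOC_REDZONE	0xFB
#define KASAN_KMEM_REDZONE	0xFC
#define KASAN_POOL_REDZONE	0xFD
#define KASAN_POOL_FREED	0xFE

void kasan_mark(const void *addr, size_t valid, size_t total,
    uint8_t code);

/* e.g., in kern_malloc, mark [size, allocsize) as a malloc red zone:
 *	kasan_mark(p, size, allocsize, KASAN_MALLOC_REDZONE);
 */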
|
#
cdf8841d |
| 07-Mar-2019 |
maxv <maxv@NetBSD.org> |
Mmh, fix len: mh_size includes the malloc header, but we don't redzone it.
|
#
807789b5 |
| 23-Dec-2018 |
maxv <maxv@NetBSD.org> |
Simplify the KASAN API, use only kasan_mark() and explain briefly. The alloc/free naming was too confusing.
|
#
38f575d1 |
| 20-Oct-2018 |
martin <martin@NetBSD.org> |
Do not assume size_t == unsigned long
|
#
008bce6f |
| 22-Aug-2018 |
christos <christos@NetBSD.org> |
- opt_kasan.h is included from <sys/asan.h>
- now that we are not using inlines, we need one more ifdef.
|
#
740156e9 |
| 22-Aug-2018 |
maxv <maxv@NetBSD.org> |
Add back the KASAN ifdefs in kern_malloc until we sort out the type issue, and fix sys/asan.h. Tested on i386, amd64 and amd64-kasan.
|
#
7f4e877e |
| 22-Aug-2018 |
maxv <maxv@NetBSD.org> |
Reduce the number of KASAN ifdefs, suggested by Christos/Taylor.
|
#
ba43769b |
| 21-Aug-2018 |
maxv <maxv@NetBSD.org> |
Need to keep track of the requested size, when realloc is used under kASan. Maybe we could use mh_rqsz by default.
|
#
ab3113fa |
| 21-Aug-2018 |
pgoyette <pgoyette@NetBSD.org> |
Conditionalize inclusion of kasan.h so that rump can build.
|
#
acb25765 |
| 20-Aug-2018 |
maxv <maxv@NetBSD.org> |
Add support for kASan on amd64. Written by me, with some parts inspired from Siddharth Muralee's initial work. This feature can detect several kinds of memory bugs, and it's an excellent feature.
It can be enabled by uncommenting these three lines in GENERIC:
	#makeoptions 	KASAN=1		# Kernel Address Sanitizer
	#options 	KASAN
	#no options 	SVS
The kernel is compiled without SVS, without DMAP and without the PCPU area. A shadow area is created at boot time, and it can cover the upper 128TB of the address space. This area is populated gradually as we allocate memory. With this design, memory consumption is kept at a minimum.
The compiler inserts calls to the __asan_* functions each time a memory access is performed. We verify whether this access is legal by looking at the shadow area.
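The check behind those hooks follows the canonical ASan 1:8 shadow encoding; this self-contained sketch assumes a shadow base address, which is not the actual amd64 constant:

#include <stdint.h>
#include <stdbool.h>

#define KASAN_SHADOW_BASE	0xffff800000000000UL	/* assumed */
#define KASAN_SHADOW_SCALE	3	/* one shadow byte per 8 bytes */

static inline int8_t *
kasan_shadow(const void *addr)
{
	return (int8_t *)(((uintptr_t)addr >> KASAN_SHADOW_SCALE) +
	    KASAN_SHADOW_BASE);
}

/* A 1-byte access is valid if the whole 8-byte cell is addressable
 * (shadow == 0), or if the offset within the cell falls below the
 * number of addressable bytes recorded in the shadow. */
static inline bool
kasan_valid_1(const void *addr)
{
	int8_t s = *kasan_shadow(addr);

	return (s == 0) || ((int8_t)((uintptr_t)addr & 7) < s);
}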
We declare our own special memcpy/memset/etc functions, because the compiler's builtins don't add the __asan_* instrumentation.
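For example, the replacement memcpy validates both ranges before copying; kasan_shadow_check() is a hypothetical helper standing in for the real range check:

#include <stddef.h>

void kasan_shadow_check(const void *addr, size_t len);	/* hypothetical */

void *
kasan_memcpy(void *dst, const void *src, size_t len)
{
	char *d = dst;
	const char *s = src;

	kasan_shadow_check(src, len);
	kasan_shadow_check(dst, len);
	while (len-- > 0)
		*d++ = *s++;
	return dst;
}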
Initially all the mappings are marked as valid. During dynamic allocations, we add a redzone, which we mark as invalid. Any access to it triggers a kASan error message. Additionally, the compiler adds a redzone on global variables, and we mark these redzones as invalid too. The illegal-access detection works with a 1-byte granularity.
For now, we cover three areas:
- global variables
- kmem_alloc-ated areas
- malloc-ated areas
More will come, but that's a good start.
|
#
b120dbba |
| 20-Aug-2018 |
maxv <maxv@NetBSD.org> |
Compute the pointer earlier, not in the return statement. No functional change.
|
#
f08cc415 |
| 28-Jul-2017 |
martin <martin@NetBSD.org> |
Avoid integer overflow in kern_malloc(). Reported by Ilja Van Sprundel. XXX Time to kill malloc() completely!
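The shape of the fix can be sketched as a guard before the header is added to the request; the names here are illustrative:

#include <stddef.h>
#include <stdint.h>

struct malloc_header { size_t mh_size; };

/* Return 0 (and allocate nothing) if reqsize + header would wrap. */
static int
malloc_size_ok(size_t reqsize, size_t *allocsize)
{
	if (reqsize > SIZE_MAX - sizeof(struct malloc_header))
		return 0;
	*allocsize = reqsize + sizeof(struct malloc_header);
	return 1;
}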
|
#
ff56bf48 |
| 06-Feb-2015 |
maxv <maxv@NetBSD.org> |
Don't include <uvm/uvm_extern.h>
|
#
20fc3c69 |
| 06-Feb-2015 |
maxv <maxv@NetBSD.org> |
Kill kmeminit().
|
#
4ae03c18 |
| 19-May-2014 |
rmind <rmind@NetBSD.org> |
- Split off PRU_ATTACH and PRU_DETACH logic into separate functions.
- Replace malloc with kmem and eliminate M_PCB while here.
- Sprinkle more asserts.
|
#
f1d428af |
| 30-Apr-2012 |
rmind <rmind@NetBSD.org> |
- Replace some malloc(9) uses with kmem(9).
- G/C M_IPMOPTS, M_IPMADDR and M_BWMETER.
|
#
e05eb71d |
| 29-Apr-2012 |
dsl <dsl@NetBSD.org> |
Remove everything to do with 'struct malloc_type' and the malloc link_set. To make code in 'external' (etc) still compile, MALLOC_DECLARE() still has to generate something of type 'struct malloc_type *'; with normal optimisation gcc generates a compile-time 0. MALLOC_DEFINE() and friends have no effect. Fix one or two places where the code would no longer compile.
|
#
dbd08155 |
| 29-Apr-2012 |
dsl <dsl@NetBSD.org> |
Remove the unused 'struct malloc_type' args to kern_malloc/realloc/free. The M_xxx arg is left on the calls to malloc() and free(); maybe they could be converted to an enumeration and just saved in the malloc header (for deep diag use). Remove the malloc_type from the mbuf extension. Fixes the rump build as well. Welcome to 6.99.6.
|
#
4b760398 |
| 28-Apr-2012 |
rmind <rmind@NetBSD.org> |
Remove MALLOC_DEBUG and MALLOCLOG, which are dead code after the malloc(9) move to kmem(9). Note: kmem(9) has debugging facilities under DEBUG/DIAGNOSTIC. However, the expensive kmguard and debug_freecheck have to be enabled manually.
|
#
76ee9eff |
| 06-Feb-2012 |
drochner <drochner@NetBSD.org> |
Align allocations >= pagesize at a page boundary, to preserve traditional malloc(9) semantics. Fixes DRI mappings shared via mmap (at least on i945). Approved by releng.
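The rule this restores can be sketched as follows; the helper name and the fixed PAGE_SIZE are assumptions for the example:

#include <stddef.h>

#define PAGE_SIZE	4096UL	/* assumed for the sketch */

/* Requests of a page or more come back page-aligned, so mappings
 * shared via mmap see traditional malloc(9) layout. */
static inline size_t
malloc_alignment(size_t size)
{
	return (size >= PAGE_SIZE) ? PAGE_SIZE : sizeof(long);
}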
|
#
28f4072d |
| 30-Jan-2012 |
mrg <mrg@NetBSD.org> |
Make sure that the 'struct malloc' header on allocations is properly aligned to (ALIGNBYTES+1). This ensures that the memory that malloc(9) returns is correctly aligned for the platform. This change has an effect on hppa, ia64, sparc and sparc64.
This is necessary on sparc for 8-byte load/store instructions; with this my SS20 boots multiuser again.
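A sketch of the alignment rule, assuming the usual ALIGNBYTES idiom (the struct and macro names are illustrative):

#include <stddef.h>

#define ALIGNBYTES	(sizeof(long) - 1)	/* platform-dependent */

struct malloc_header { size_t mh_size; };

/* Round the header up to (ALIGNBYTES + 1) so the payload that
 * follows it is correctly aligned for 8-byte load/store. */
#define MALLOC_HDR_SIZE \
	((sizeof(struct malloc_header) + ALIGNBYTES) & ~ALIGNBYTES)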
|
#
3ad48751 |
| 30-Jan-2012 |
rmind <rmind@NetBSD.org> |
- kern_realloc: fix a recent regression, use correct size of current allocation. - kern_malloc: constify.
|
#
8effb66e |
| 28-Jan-2012 |
rmind <rmind@NetBSD.org> |
- Instead of kmem_cache_max, calculate max index and avoid a shift.
- Use __read_mostly and __cacheline_aligned.
- Make kmem_{intr_alloc,free} public.
- Misc.
|
#
e62ee4d4 |
| 27-Jan-2012 |
para <para@NetBSD.org> |
Extend vmem(9) to be able to allocate resources for its own needs. Simplify uvm_map handling (no special kernel entries anymore, no relocking). Make malloc(9) a thin wrapper around kmem(9) (with a private interface, for interrupt-safety reasons).
releng@ acknowledged
|