#
479c151d |
| 20-Sep-2024 |
jsg <jsg@openbsd.org> |
remove unneeded semicolons; checked by millert@
|
#
42a1f524 |
| 30-Mar-2024 |
miod <miod@openbsd.org> |
In _malloc_init(), round up the region being mprotected RW to the malloc page size, rather than relying upon mprotect to round up to the actual mmu page size.
This repairs malloc operation on systems where the malloc page size (1 << _MAX_PAGE_SHIFT) is larger than the mmu page size.
ok otto@
|
#
8fa61426 |
| 19-Dec-2023 |
otto <otto@openbsd.org> |
A small cleanup of malloc_bytes(), getting rid of a goto and a tiny bit of optimization; ok tb@ asou@
|
#
2a60a4d2 |
| 04-Dec-2023 |
otto <otto@openbsd.org> |
Save backtraces to show in leak dump. Depth of backtrace set by malloc option D (aka 1), 2, 3 or 4. No performance impact if not used. ok asou@
|
#
5f3e01b7 |
| 04-Nov-2023 |
otto <otto@openbsd.org> |
KNF plus fixed a few signed vs unsigned compares (that were actually not real problems)
|
#
53a1814d |
| 26-Oct-2023 |
otto <otto@openbsd.org> |
A few micro-optimizations; ok asou@
|
#
0778079a |
| 22-Oct-2023 |
otto <otto@openbsd.org> |
When option D is active, store callers for all chunks; this avoids the 0x0 call sites in leak reports. Also display more info on a detected write to a free chunk: print the info about where the chunk was allocated, and for the preceding chunk as well. ok asou@
|
#
db142dbd |
| 09-Sep-2023 |
asou <asou@openbsd.org> |
Print a warning message when memory cannot be allocated in putleakinfo().
ok otto.
|
#
5f0c994b |
| 30-Jun-2023 |
otto <otto@openbsd.org> |
Recommit "Allow asking for deeper callers for leak reports using malloc options"
Now only enabled for platforms where it's known to work, and written as inline functions instead of a macro.
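A minimal sketch of the inline-function approach (names invented, not the libc code): __builtin_return_address() requires a compile-time constant argument, so each depth gets its own call, selected by an always-inline helper.

/*
 * Sketch only: depths > 0 walk stack frames, which is why this is
 * enabled only on platforms where it is known to work.
 */
static inline __attribute__((always_inline)) void *
caller_at(int level)
{
	switch (level) {
	case 1: return __builtin_return_address(1);
	case 2: return __builtin_return_address(2);
	case 3: return __builtin_return_address(3);
	default: return __builtin_return_address(0);
	}
}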
|
#
e59245c0 |
| 23-Jun-2023 |
otto <otto@openbsd.org> |
Revert previous, not all platforms allow compiling __builtin_return_address(a) with a != 0.
|
#
1be2752a |
| 22-Jun-2023 |
otto <otto@openbsd.org> |
Allow asking for deeper callers for leak reports using malloc options. ok deraadt@
|
#
9889fdb6 |
| 07-Jun-2023 |
aoyama <aoyama@openbsd.org> |
Add portable and m88k-specific versions of the lb() function, because unfortunately gcc3 does not have __builtin_clz().
ok miod@ otto@
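A hedged sketch of what the portable fallback can look like (the actual lb() may differ):

/* Portable fallback: index of the highest set bit, for compilers
 * without __builtin_clz(); caller must ensure x != 0. */
static unsigned int
lb(unsigned int x)
{
	unsigned int b = 0;

	while (x >>= 1)		/* count how often x can be halved */
		b++;
	return b;		/* floor(log2(x)) */
}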
|
#
e78208e2 |
| 04-Jun-2023 |
otto <otto@openbsd.org> |
More thorough write-after-free checks.
On free, chunks (the pieces of a page used for smaller allocations) are junked and then validated after they leave the delayed free list. So after free, a chunk always contains junk bytes. This means that if we start with the right contents for a new page of chunks, we can *validate* instead of *write* junk bytes when (re)using a chunk.
With this, we can detect write-after-free when a chunk is recycled, not just when a chunk is in the delayed free list. We do a little bit more work on initial allocation of a page of chunks and when re-using (as we validate now even on junk level 1).
Also: some extra consistency checks for recallocaray(3) and fixes in error messages to make them more consistent, with man page bits.
Plus regress additions.
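A sketch of the validate-instead-of-write idea (junk value and names illustrative, not the libc internals):

#include <stdlib.h>

#define SOME_JUNK	0xdb	/* assumed fill byte for freed chunks */

/* On reuse, a chunk must still contain only junk bytes; anything
 * else means someone wrote to it after free. */
static void
validate_junk(const unsigned char *p, size_t sz)
{
	size_t i;

	for (i = 0; i < sz; i++)
		if (p[i] != SOME_JUNK)
			abort();	/* write-after-free detected */
}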
|
#
d88bac1a |
| 27-May-2023 |
otto <otto@openbsd.org> |
Remove malloc interposition, a workaround that was once needed for emacs. ok guenther@
|
#
760f5d48 |
| 10-May-2023 |
otto <otto@openbsd.org> |
As mmap(2) is no longer a LOCK syscall, do away with the extra unlock-lock dance; it serves no real purpose any more. Confirmed by a small performance increase in tests. ok tb@
|
#
0eee8115 |
| 21-Apr-2023 |
jsg <jsg@openbsd.org> |
remove duplicate include; ok otto@
|
#
b8e81c95 |
| 16-Apr-2023 |
otto <otto@openbsd.org> |
Dump (leak) info using utrace(2) and always compile the code in, except for bootblocks. This way we have built-in leak detection always (if enabled by malloc flags). See man pages for details.
|
#
250bcd55 |
| 05-Apr-2023 |
otto <otto@openbsd.org> |
Introduce variation in location of junked bytes; ok tb@
|
#
42f826b2 |
| 01-Apr-2023 |
otto <otto@openbsd.org> |
Check all chunks in the delayed free list for write-after-free. Should catch more of them and closer (in time) to the WAF. ok tb@
|
#
d7c8d7e7 |
| 25-Mar-2023 |
otto <otto@openbsd.org> |
Change malloc chunk sizes to be fine-grained.
The basic idea is simple: one of the reasons the recent sshd bug is potentially exploitable is that an (erroneously) freed malloc chunk gets re-used in a different role. malloc has power-of-two chunk sizes, so one page of chunks holds many different types of allocations. Userland malloc has no knowledge of types; we only know about sizes. So I changed that to use finer-grained chunk sizes.
This has some performance impact, as we need to allocate chunk pages in more cases. Gain it back by allocating chunk_info pages in a bundle, and by using fewer buckets if !malloc option S. The chunk sizes used are 16, 32, 48, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, 384, 448, 512, 640, 768, 896, 1024, 1280, 1536, 1792, 2048 (and a few more for sparc64 with its 8k pages and loongson with its 16k pages).
If malloc option S (or rather cache size 0) is used, we use chunks in strict multiples of 16, to get as many buckets as possible. ssh(d) enabled malloc option S; in general, security-sensitive programs should.
See the find_bucket() and bin_of() functions. Thanks to Tony Finch for pointing me to code to compute nice bucket sizes.
ok tb@
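A sketch of the size grid this describes (not the actual find_bucket()/bin_of() code): multiples of 16 up to 128, then four buckets per power of two.

/* Round a request up to the grid 16, 32, ..., 128, 160, 192, 224,
 * 256, 320, ..., 2048: a power of two with a 2-bit mantissa,
 * floored at 16-byte granularity. */
static unsigned int
round_to_bucket_size(unsigned int size)
{
	unsigned int n_bits, shift, mask;

	if (size <= 16)
		return 16;
	n_bits = 31 - __builtin_clz(size - 1);	 /* highest bit of size-1 */
	shift = n_bits - 2 > 4 ? n_bits - 2 : 4; /* keep 16-byte steps */
	mask = (1U << shift) - 1;
	return (size + mask) & ~mask;		 /* round up to the grid */
}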
|
#
eed1419e |
| 27-Feb-2023 |
otto <otto@openbsd.org> |
There is no reason to-be-cleared chunks cannot participate in delayed freeing; ok tb@
|
#
e70a8168 |
| 27-Dec-2022 |
otto <otto@openbsd.org> |
Change the way malloc_init() works so that the main data structures can be made immutable to provide extra protection. Also init pools on-demand: only pools that are actually used are initialized.
Tested by many
|
#
0b4b01c9 |
| 14-Oct-2022 |
deraadt <deraadt@openbsd.org> |
put the malloc_readonly struct into the "openbsd.mutable" section, so that the kernel and ld.so will know not to mark it immutable. malloc handles the read/write transitions by itself.
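A sketch of the placement (struct fields invented):

/* The object lives in a section the kernel and ld.so know to leave
 * mutable; malloc mprotect()s it between read-only and read-write
 * around updates. */
static struct {
	int	chunk_canaries;		/* illustrative fields only */
	int	def_maxcache;
} malloc_readonly __attribute__((section(".openbsd.mutable")));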
|
#
1c17c713 |
| 30-Jun-2022 |
guenther <guenther@openbsd.org> |
To figure out whether a large allocation can be grown into the following page(s), we've been first mquery()ing for it, mmap()ing w/o MAP_FIXED if available, and then munmap()ing if there was a race. Instead, just try it directly with mmap(MAP_FIXED | __MAP_NOREPLACE).
tested in snaps for weeks
ok deraadt@
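A sketch of the direct approach (function name invented; __MAP_NOREPLACE is OpenBSD-specific):

#include <stddef.h>
#include <sys/mman.h>

/* Map the pages immediately after the region; the kernel refuses if
 * anything already lives there, so no mquery()/munmap() race dance. */
static void *
grow_in_place(void *tail, size_t extra)
{
	void *p = mmap(tail, extra, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE | MAP_FIXED | __MAP_NOREPLACE, -1, 0);

	return p == MAP_FAILED ? NULL : p;	/* p == tail on success */
}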
|
#
799edd11 |
| 26-Feb-2022 |
otto <otto@openbsd.org> |
Currently malloc caches a number of freed regions up to 128k in size. This cache is indexed by size (in # of pages), so it is very quick to check. Some programs allocate and deallocate larger allocations in a frantic way. Accommodate those programs by also keeping a cache of variable-sized regions between 128k and 2M.
Tested by many in snaps; ok deraadt@
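A sketch of why the size-indexed cache is quick, and how the new range differs (names, sizes, and fields illustrative):

#include <stddef.h>

/* Small regions: one bucket per size in pages, O(1) to check. */
struct smallcache {
	void	*pages[8];	/* cached regions, all the same size */
	size_t	 used;
};
static struct smallcache smallcache[32 + 1];	/* index = size in pages */

/* 128k..2M regions instead share one small array of variable-sized
 * entries that is searched linearly. */
struct bigcache {
	void	*page;
	size_t	 psize;
};
static struct bigcache bigcache[64];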
|