History log of /netbsd-src/sys/arch/amd64/include/param.h (Results 1 – 25 of 38)
Revision Date Author Comments
# ff2d91f1 29-Jun-2020 jdolecek <jdolecek@NetBSD.org>

increase UPAGES (used for lwp kernel stack) for SVS so that the
amount of actually usable kernel stack is the same for SVS and
non-SVS kernels (currently 12 KiB)

discussed with maxv@, part of investigation for PR kern/55402
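
For illustration, a conditional definition along these lines achieves this
(the page counts here are hypothetical, not the actual param.h values):

	/* sketch: one extra stack page when SVS consumes one for switching */
	#ifdef SVS
	#define	UPAGES	4	/* 3 usable pages + 1 consumed by SVS */
	#else
	#define	UPAGES	3	/* 12 KiB of usable lwp kernel stack */
	#endif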


# 5d43782c 17-Mar-2020 maxv <maxv@NetBSD.org>

Add a redzone between the pcb and the stack. Sent to port-amd64@.


# 081da2e4 08-Feb-2020 maxv <maxv@NetBSD.org>

Retire KLEAK.

KLEAK was a nice feature and served its purpose; it allowed us to detect
dozens of info leaks on the kernel->userland boundary, and thanks to it we
tackled a good part of the infoleak

Retire KLEAK.

KLEAK was a nice feature and served its purpose; it allowed us to detect
dozens of info leaks on the kernel->userland boundary, and thanks to it we
tackled a good part of the infoleak problem 1.5 years ago.

Nowadays however, we have kMSan, which can detect uninitialized memory in
the kernel. kMSan supersedes KLEAK: it can detect what KLEAK was able to
detect, but in addition, (1) it operates in all of the kernel and not just
the kernel->userland boundary, (2) it requires no user interaction, and (3)
it is deterministic and not statistical.

That makes kMSan the feature of choice to detect info leaks nowadays;
people interested in detecting info leaks should boot a kMSan kernel and
just wait for the magic to happen.

KLEAK was a good ride, and a fun project, but now it is time for it to go.

Discussed with several people, including Thomas Barabosch.


# e5b2948d 22-Jan-2020 ad <ad@NetBSD.org>

Move the UBC defaults into vmparam.h


# 96eb10ba 17-Jan-2020 ad <ad@NetBSD.org>

Bump UBC_WINSHIFT & UBC_NWINS to more reasonable values for amd64.


# 10c5b023 14-Nov-2019 maxv <maxv@NetBSD.org>

Add support for Kernel Memory Sanitizer (kMSan). It detects uninitialized
memory used by the kernel at run time, and just like kASan and kCSan, it
is an excellent feature. It has already detected 38 uninitialized variables
in the kernel during my testing, which I have since discreetly fixed.

We use two shadows:
- "shad", to track uninitialized memory with a bit granularity (1:1).
Each bit set to 1 in the shad corresponds to one uninitialized bit of
real kernel memory.
- "orig", to track the origin of the memory with a 4-byte granularity
(1:1). Each uint32_t cell in the orig indicates the origin of the
associated uint32_t of real kernel memory.

The memory consumption of these shadows is substantial, so at least 4GB of
RAM is recommended to run kMSan.

The compiler inserts calls to specific __msan_* functions on each memory
access, to manage both the shad and the orig and detect uninitialized
memory accesses that change the execution flow (like an "if" on an
uninitialized variable).
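
Conceptually, the transformation looks like the sketch below; the helper
names are illustrative only, not the actual __msan_* entry points:

	#include <stddef.h>
	#include <stdint.h>

	uint32_t __msan_shadow_for(const void *, size_t);	/* hypothetical */
	void __msan_warn_about(const void *);			/* hypothetical */

	int
	example(void)
	{
		int x;	/* uninitialized: its shadow bits start as all 1s */

		/* inserted by the compiler before the branch below: */
		if (__msan_shadow_for(&x, sizeof(x)) != 0)
			__msan_warn_about(&x);	/* flow depends on uninit bits */

		if (x > 0)	/* the original, possibly-uninitialized branch */
			return 1;
		return 0;
	}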

We mark as uninit several types of memory buffers (stack, pools, kmem,
malloc, uvm_km), and check each buffer passed to copyout, copyoutstr,
bwrite, if_transmit_lock and DMA operations, to detect uninitialized memory
that leaves the system. This allows us to detect kernel info leaks in a way
that is more efficient and also more user-friendly than KLEAK.

Unlike kASan, kMSan requires comprehensive coverage, i.e., we cannot
tolerate having even one non-instrumented function, because this could cause
false positives. kMSan cannot instrument ASM functions, so I converted
most of them to __asm__ inlines, which kMSan is able to instrument. Those
that remain receive special treatment.

Again unlike kASan, kMSan uses a TLS, so we must context-switch this
TLS during interrupts. We use different contexts depending on the interrupt
level.

The orig tracks precisely the origin of a buffer. We use a special encoding
for the orig values, and pack together in each uint32_t cell of the orig:
- a code designating the type of memory (Stack, Pool, etc), and
- a compressed pointer, which points either (1) to a string containing
the name of the variable associated with the cell, or (2) to an area
in the kernel .text section which we resolve to a symbol name + offset.

This encoding avoids consuming extra memory to associate information with
each cell, and produces precise output that can tell, for example, the name
of an uninitialized variable on the stack, the function in which it was
pushed on the stack, and the function where we accessed this uninitialized
variable.
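
A minimal sketch of such a packed encoding, with made-up field widths, type
codes, and helper names (the real kMSan layout may differ):

	#include <stdint.h>

	#define	KMSAN_TYPE_STACK	1	/* illustrative type codes */
	#define	KMSAN_TYPE_POOL		2

	/* pack a 4-bit type code with a 28-bit compressed pointer */
	static inline uint32_t
	kmsan_orig_encode(uint32_t type, uintptr_t ptr, uintptr_t base)
	{
		return (type << 28) | (uint32_t)(((ptr - base) >> 4) & 0x0fffffff);
	}

	static inline uint32_t
	kmsan_orig_type(uint32_t orig)
	{
		return orig >> 28;	/* recover the memory-type code */
	}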

kMSan is available with LLVM, but not with GCC.

The code is organized in a way that is similar to kASan and kCSan, which
means that architectures other than amd64 can be supported as well.


# 09ba5bf4 28-Sep-2019 christos <christos@NetBSD.org>

remove local version of mstohz() now that <sys/param.h> provides it.


# 001c9b0e 20-Aug-2019 riastradh <riastradh@NetBSD.org>

New macro ALIGNED_POINTER_LOAD.

To be used with ALIGNED_POINTER(p,t) instead of writing *(const t *)p
directly. This way, on machines without strict alignment, we can use
memcpy to pacify sanitizers, while getting the same compiled code in
the end with a single (say) MOV instruction.
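
Presumably the macro reduces to a compiler-visible memcpy; a usage sketch
under that assumption (the exact definition in <sys/param.h> may differ):

	#include <stdint.h>
	#include <string.h>

	/* sketch: a memcpy the optimizer collapses to a single load */
	#define	ALIGNED_POINTER_LOAD(q, p, t)	memcpy((q), (p), sizeof(t))

	uint32_t
	load32(const void *p)
	{
		uint32_t v;

		ALIGNED_POINTER_LOAD(&v, p, uint32_t);	/* one MOV when aligned */
		return v;
	}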


# 573f65ca 16-Mar-2019 rin <rin@NetBSD.org>

Bump STACK_ALIGNBYTES to (16 - 1) to satisfy the requirement of the AMD64
System V ABI at the kernel level. This is because

(1) for LLDB, we want to bypass libc/csu (and therefore manual stack
alignment in _start), and

(2) rtld in glibc >= 2.23 for Linux/x86_64 requires it.

Fix SEGV for Linux/x86_64 binaries with glibc >= 2.23, reported as
PR port-amd64/54052.
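
In param.h terms the change is one constant; shown here with the usual
companion macro for context (the STACK_ALIGN form below is a sketch):

	#define	STACK_ALIGNBYTES	(16 - 1)	/* 16-byte alignment mask */

	/* typical companion: round a stack pointer down to the boundary */
	#define	STACK_ALIGN(sp, bytes) \
		((uintptr_t)(sp) & ~(uintptr_t)(bytes))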


# 427af037 11-Feb-2019 cherry <cherry@NetBSD.org>

We reorganise definitions for XEN source support as follows:

XEN - common sources required for baseline XEN support.
XENPV - sources required for support of XEN in PV mode.
XENPVHVM - sources required for support for XEN in HVM mode.
XENPVH - sources required for support for XEN in PVH mode.


# 072aa173 07-Jan-2019 jdolecek <jdolecek@NetBSD.org>

move DEV_BSIZE, DEV_BSHIFT out of MD param.h, they are the same on all ports

also move BLKDEV_IOSIZE, MAXPHYS, but allow override since some ports
have different values (powerpc uses NBPG for BLKDEV_IOSIZE, sun2/sun3
have a lower MAXPHYS)
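
The override idiom is the usual #ifndef guard; a sketch with the customary
defaults (the values shown are the common ones and may not match the tree):

	/* MI defaults; an MD param.h that defines these first wins */
	#define	DEV_BSHIFT	9			/* log2(DEV_BSIZE) */
	#define	DEV_BSIZE	(1 << DEV_BSHIFT)	/* 512-byte device block */

	#ifndef BLKDEV_IOSIZE
	#define	BLKDEV_IOSIZE	2048
	#endif
	#ifndef MAXPHYS
	#define	MAXPHYS		(64 * 1024)		/* max raw I/O transfer */
	#endif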


# e5fadd7f 02-Dec-2018 maxv <maxv@NetBSD.org>

Introduce KLEAK, a new feature that can detect kernel information leaks.

It works by tainting memory sources with marker values, letting the data
travel through the kernel, and scanning the kernel<->user frontier for
these marker values. Combined with compiler instrumentation and rotation
of the markers, it is able to yield relevant results with little effort.

We taint the pools and the stack, and scan copyout/copyoutstr. KLEAK is
supported on amd64 only for now, but it is not complicated to add more
architectures (just a matter of having the address of .text, and a stack
unwinder).
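
As an illustration of the tainting idea only (marker value and helpers made
up here, not the actual KLEAK code):

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdint.h>
	#include <string.h>

	#define	KLEAK_MARKER	0xaa	/* hypothetical rotating marker byte */

	/* taint a freshly allocated pool/stack buffer with the marker */
	static void
	kleak_taint(void *buf, size_t len)
	{
		memset(buf, KLEAK_MARKER, len);
	}

	/* kernel->user frontier: does the buffer still carry markers? */
	static bool
	kleak_scan(const void *src, size_t len)
	{
		const uint8_t *p = src;

		for (size_t i = 0; i < len; i++)
			if (p[i] == KLEAK_MARKER)
				return true;	/* possible info leak */
		return false;
	}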

A userland tool is provided that allows executing a command in rounds
and monitoring the leaks generated all the while.

KLEAK already detected directly 12 kernel info leaks, and prompted changes
that in total fixed 25+ leaks.

Based on an idea developed jointly with Thomas Barabosch (of Fraunhofer
FKIE).


# 7c492317 22-Aug-2018 maxv <maxv@NetBSD.org>

Add support for monitoring the stack with kASan. This allows us to detect
illegal memory accesses occurring there.

The compiler inlines a piece of code in each function that adds redzones
around the local variables and poisons them. The illegal accesses are then
detected using the usual kASan machinery.

The stack size is doubled, from 4 pages to 8 pages.

Several boot functions are marked with the __noasan flag, to prevent the
compiler from adding redzones in them (because we haven't yet initialized
kASan). The kasan_early_init function is called early at boot time to
quickly create the shadow for the current stack; after this is done, we
don't need __noasan anymore in the boot path.

We pass -fasan-shadow-offset=0xDFFF900000000000, because the compiler
wants to do
shad = shadow-offset + (addr >> 3)
and we do, in kasan_addr_to_shad
shad = KASAN_SHADOW_START + ((addr - CANONICAL_BASE) >> 3)
hence
shad = KASAN_SHADOW_START + (addr >> 3) - (CANONICAL_BASE >> 3)
= [KASAN_SHADOW_START - (CANONICAL_BASE >> 3)] + (addr >> 3)
implies
shadow-offset = KASAN_SHADOW_START - (CANONICAL_BASE >> 3)
= 0xFFFF800000000000 - (0xFFFF800000000000 >> 3)
= 0xDFFF900000000000
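
Written out as code, the mapping derived above is roughly as follows (a
sketch; the real kasan_addr_to_shad may differ in details):

	#include <sys/types.h>	/* vaddr_t */

	static inline int8_t *
	kasan_addr_to_shad(const void *addr)
	{
		vaddr_t va = (vaddr_t)addr;

		return (int8_t *)(KASAN_SHADOW_START +
		    ((va - CANONICAL_BASE) >> 3));
	}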

In UVM, we add a kasan_free (that is not preceded by a kasan_alloc). We
don't add poisoned redzones ourselves, but all the functions we execute
do, so we need to manually clear the poison before freeing the stack.

With the help of Kamil for the makefile stuff.


# f4df18fb 16-Mar-2018 maxv <maxv@NetBSD.org>

Add one more page for the stack, to compensate for the fact that SVS's
stack switching mechanism consumes approximately one page.


# a96590af 19-Feb-2018 sborrill <sborrill@NetBSD.org>

Double the size of MSGBUFSIZE, as the existing value is not big enough to
hold the boot dmesg on modern server-class hardware with lots of CPUs, etc.


# 17f34603 11-Jan-2018 maxv <maxv@NetBSD.org>

Initialize ist0 in cpu_init_tss. On amd64 this is the DDB stack, and it has
nothing to do with ci_intrstack. While here, apply style fixes, and don't forget to
pass UVM_KMF_ZERO in uvm_km_alloc.


# b58f1fcb 14-Jun-2017 maxv <maxv@NetBSD.org>

Define MAXPHYSMEM globally.


# 2b265831 02-Feb-2017 maxv <maxv@NetBSD.org>

Increase KERNTEXTOFF from 1MB to 2MB on amd64. [1MB, 2MB) is now handled
by UVM, so there is no physical loss.

On amd64 we always remap the kernel text with 2MB pages, and because of the
1MB start address we were forced to map [0MB, 2MB) inside the first large
page. The problem is that the lower half is used by UVM to allocate physical
pages, and it is possible that some of these could be used by userland. We
could end up with userland-controllable data mapped into the kernel text on
a privileged page, which is far from a good idea from a security point of view.

I am not fixing i386 yet, because the large page size depends on PAE, and
we probably don't want the text located at 4MB on low-memory systems.

(note: I didn't introduce this issue, it was already there when I came in)


# 15149c9a 20-Jan-2017 maya <maya@NetBSD.org>

increase max io mem on amd64. some devices need it.


# 9884c86a 27-Oct-2015 mrg <mrg@NetBSD.org>

make sure MSGBUFSIZE can't expand strangely by using parens.
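
The classic macro-hygiene problem being guarded against, with a made-up
value:

	#define	BAD_MSGBUFSIZE	16 * 1024	/* unparenthesized (hypothetical) */
	#define	MSGBUFSIZE	(16 * 1024)	/* parenthesized */

	/*
	 * x % BAD_MSGBUFSIZE expands to (x % 16) * 1024 -- not what was meant.
	 * x % MSGBUFSIZE expands to x % (16 * 1024), as intended.
	 */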


# 0c794722 20-Apr-2012 rmind <rmind@NetBSD.org>

- Convert x86 MD code, mainly pmap(9), e.g. the TLB shootdown code, to use
kcpuset(9) and thus replace hardcoded CPU bitmasks. This removes the
limitation on the maximum number of CPUs.

- Support up to 256 CPUs on the amd64 architecture by default.

Bug fixes, improvements, completion of Xen part and testing on 64-core
AMD Opteron(tm) Processor 6282 SE (also, as Xen HVM domU with 128 CPUs)
by Manuel Bouyer.
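
A small usage sketch of the kcpuset(9) API that replaces the fixed-width
masks (kernel context assumed; see kcpuset(9) for the authoritative
interface):

	#include <sys/kcpuset.h>
	#include <sys/cpu.h>

	/* sketch: a dynamically sized CPU set instead of a uint32_t bitmask */
	static void
	example(struct cpu_info *ci)
	{
		kcpuset_t *cpus;

		kcpuset_create(&cpus, true);		/* zeroed, fits all CPUs */
		kcpuset_set(cpus, cpu_index(ci));	/* mark one target CPU */
		if (!kcpuset_iszero(cpus)) {
			/* ... send TLB shootdown IPIs to the set ... */
		}
		kcpuset_destroy(cpus);
	}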


# 5f319369 04-Feb-2012 para <para@NetBSD.org>

improve sizing of kmem_arena now that more allocations are made from it
don't enforce limits if not required

ok: riz@


# dd23e710 24-Jan-2012 christos <christos@NetBSD.org>

Use and define ALIGN(), ALIGN_POINTER() and STACK_ALIGN() consistently,
and avoid defining them in 10 different places if not needed.
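
For reference, the classic shape of the rounding macro (a sketch; the
per-port ALIGNBYTES value varies):

	#include <stdint.h>

	#define	ALIGNBYTES	(sizeof(long) - 1)
	#define	ALIGN(p) \
		(((uintptr_t)(p) + ALIGNBYTES) & ~(uintptr_t)ALIGNBYTES)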


# e8bec33b 20-Jan-2012 joerg <joerg@NetBSD.org>

Change CMSG_SPACE and CMSG_LEN to provide Integer Constant Expressions
again. This was changed in sys/socket.h r1.51 to work around fallout
from the IPv6 aux data migration. It broke the historic ABI on some
platforms. This commit restores compatibility for netbsd32 code on such
platforms and provides a template for future changes to the CMSG_*
alignment. Revert PCC/Clang workarounds in postfix and tmux.
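
The ICE-friendly shape is pure arithmetic on sizeof, along these lines (a
sketch, with the alignment quantum shown as sizeof(long) purely for
illustration):

	#define	__CMSG_ALIGN(n) \
		(((n) + sizeof(long) - 1) & ~(sizeof(long) - 1))
	#define	CMSG_LEN(l)	(__CMSG_ALIGN(sizeof(struct cmsghdr)) + (l))
	#define	CMSG_SPACE(l)	(__CMSG_ALIGN(sizeof(struct cmsghdr)) + \
				 __CMSG_ALIGN(l))

	/* both are integer constant expressions for constant l, so e.g.
	   "char buf[CMSG_SPACE(sizeof(int))];" works at file scope */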


# 914d59e3 26-Jul-2011 yamt <yamt@NetBSD.org>

g/c round_pdr

