History log of /openbsd-src/sys/arch/amd64/include/frameasm.h (Results 1 – 25 of 27)
Revision Date Author Comments
# 2dd0808e 27-Jul-2023 guenther <guenther@openbsd.org>

The interrupt resume (Xdoreti) and recurse (Xspllower) paths are
invoked using indirect branches and should have endbr64's.

ok deraadt@


# e3e62cc7 17-Apr-2023 deraadt <deraadt@openbsd.org>

Add endbr64 instructions to most of the ENTRY() macros.
The IDTVEC() and KIDTVEC() macros also get an endbr64, and therefore we need
to change the way that vectors are aliased with a new IDTVEC_ALIAS() macro.
with guenther, jsg
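
The shape of such a macro can be sketched as follows. This is an illustrative reconstruction only, not the actual frameasm.h text: the point is that endbr64 must be the first instruction at every label reached via an indirect branch, so CPUs with CET indirect branch tracking accept the target.

```asm
/* Hypothetical sketch, not the actual frameasm.h definition: an entry
 * macro whose function label begins with endbr64 so that indirect
 * call/jmp targets satisfy CET indirect branch tracking. */
#define ENTRY(x)				\
	.text					;\
	.globl	x				;\
	.type	x,@function			;\
	x:	endbr64
```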



# a212cfe0 12-Nov-2020 guenther <guenther@openbsd.org>

Simplify interrupt entry stubs to not push around bogus trapno+err
slots but rather go directly from the iretq frame to an intrframe.
This saves 22 bytes in each of the 148 interrupt entry points.

ok mpi@



# 02a59bc4 09-Nov-2020 guenther <guenther@openbsd.org>

Give sizes and types to more functions and objects.
No effect on object code, just symbol table accuracy

ok mpi@


# bb386764 02-Nov-2020 guenther <guenther@openbsd.org>

Restore abstraction of register saving into macros in frameasm.h
The Meltdown mitigation work ran right across the previous abstractions;
draw slightly different lines and use separate macros for interrupts
vs traps vs syscall.

The generated ASM for traps and general interrupts is completely
unchanged; the ASM for the four directly routed interrupts is brought
into line with the general interrupts; the ASM for syscalls is
changed to delay reenabling interrupts until after all registers
are saved and cleared.

ok mpi@



# 5c3fa5a3 07-Aug-2019 guenther <guenther@openbsd.org>

Mitigate CVE-2019-1125: block speculation past conditional jump to mis-skip
or mis-take swapgs in interrupt path and in trap/fault/exception path. The
latter is improved to have no conditionals around this when Meltdown mitigation
is in effect. Codepatch out the fences based on the description of CPU bugs
in the (well written) Linux commit message.

feedback from kettenis@
ok deraadt@



# ddcc9d3c 12-May-2019 guenther <guenther@openbsd.org>

s/availible/available/


# a4858df8 23-Jul-2018 guenther <guenther@openbsd.org>

Do "Return stack refilling", based on the "Return stack underflow" discussion
and its associated appendix at https://support.google.com/faqs/answer/7625886
This should address at least some cases of "SpectreRSB" and earlier
Spectre variants; more commits to follow.

The refilling is done in the enter-kernel-from-userspace and
return-to-userspace-from-kernel paths, making sure to do it before
unblocking interrupts so that a successive interrupt can't get the
CPU to C code without doing this refill. Per the link above, it
also does it immediately after mwait, apparently in case the low-power
CPU states of idle-via-mwait flush the RSB.

ok mlarkin@ deraadt@
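
One common refill idiom (loosely following the sequence described at the linked Google FAQ; this is not OpenBSD's actual macro, and the loop count is illustrative) looks like this. Each call pushes a benign return address onto the return stack buffer; the instructions between a call and its target form a speculation trap that is never executed architecturally:

```asm
/* Illustrative RSB-refill sequence, not the real frameasm.h macro. */
	mov	$8, %ecx
1:	call	2f
3:	pause			/* speculation trap: catches a return */
	lfence			/* predicted from a stale RSB entry   */
	jmp	3b
2:	call	4f
5:	pause
	lfence
	jmp	5b
4:	dec	%ecx
	jnz	1b
	add	$(16*8), %rsp	/* discard the 16 pushed return addresses */
```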



# 38863611 21-Jul-2018 guenther <guenther@openbsd.org>

Remove the "got meltdown?" conditional from INTRENTRY by doing it
unconditionally and codepatching it out on CPUs that don't need/do
the mitigation.
Align the from-{kernel,userspace} targets in INTRENTRY with _ALIGN_TRAPS
Align x2apic_eoi using KUENTRY() instead of the artisanal
segment+label+.globl bits it uses currently
s/testq/testb/ for SEL_RPL checks

ok kettenis@ mlarkin@



# 841a8c5c 10-Jul-2018 guenther <guenther@openbsd.org>

Drop the ignored selectors (tf_[defg]s) from the trap and interrupt frames.

ok mlarkin@ deraadt@ mpi@ kettenis@


# 6404221f 09-Jul-2018 guenther <guenther@openbsd.org>

Use a slightly more efficient zeroing idiom when clearing GPRs

ok mlarkin@ mortimer@


# f0f07b0b 14-Jun-2018 guenther <guenther@openbsd.org>

Clear the GPRs when entering the kernel from userspace so that
user-controlled values can't take part in speculative execution in
the kernel down paths that end up "not taken" but that may cause
user-visible effects (cache, etc).

prodded by dragonflybsd commit 9474cbef7fcb61cd268019694d94db6a75af7dbe
ok deraadt@ kettenis@
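
A minimal sketch of such clearing (illustrative only, not the actual macro body) uses the 32-bit xor idiom: it has a short encoding, zero-extends to the full 64-bit register, and is recognized by the CPU as dependency-breaking:

```asm
/* Illustrative only: zero the user-visible GPRs on kernel entry so
 * stale user-controlled values cannot feed speculative execution. */
	xorl	%eax,%eax
	xorl	%ebx,%ebx
	xorl	%ecx,%ecx
	xorl	%edx,%edx
	xorl	%esi,%esi
	xorl	%edi,%edi
	xorl	%ebp,%ebp
	xorl	%r8d,%r8d	/* ...and likewise %r9d through %r15d */
```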



# 72062f84 10-Jun-2018 guenther <guenther@openbsd.org>

Put the register-saving parts of INTRENTRY() into their own macros for
separate use later. No binary change


# 1ecfc646 09-Jun-2018 guenther <guenther@openbsd.org>

Move all the DDBPROF logic into the trap03 (#BP) handler to keep alltraps
and intr_fast_exit clean

ok mpi@


# fbad0e3e 26-Apr-2018 guenther <guenther@openbsd.org>

Reorder trapframe/intrframe to put %rbp next to %rip and make it
behave like a real call frame, thus vastly simplifying the ddb back
trace logic.

based on whinging from deraadt@
ok jasper@ mpi@ phessler@



# b767b017 21-Feb-2018 guenther <guenther@openbsd.org>

Meltdown: implement user/kernel page table separation.

On Intel CPUs which speculate past user/supervisor page permission checks,
use a separate page table for userspace with only the minimum of kernel code
and data required for the transitions to/from the kernel (still marked as
supervisor-only, of course):
- the IDT (RO)
- three pages of kernel text in the .kutext section for interrupt, trap,
and syscall trampoline code (RX)
- one page of kernel data in the .kudata section for TLB flush IPIs (RW)
- the lapic page (RW, uncachable)
- per CPU: one page for the TSS+GDT (RO) and one page for trampoline
stacks (RW)

When a syscall, trap, or interrupt takes a CPU from userspace to kernel the
trampoline code switches page tables, switches stacks to the thread's real
kernel stack, then copies over the necessary bits from the trampoline stack.
On return to userspace the opposite occurs: recreate the iretq frame on the
trampoline stack, switch stack, switch page tables, and return to userspace.

mlarkin@ implemented the pmap bits and did 90% of the debugging, diagnosing
issues on MP in particular, and drove the final push to completion.
Many rounds of testing by naddy@, sthen@, and others
Thanks to Alex Wilson from Joyent for early discussions about trampolines
and their data requirements.
Per-CPU page layout mostly inspired by DragonFlyBSD.

ok mlarkin@ deraadt@
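
The userspace-to-kernel leg of that transition can be sketched roughly as below. This is a hedged reconstruction; the CPUVAR field names (SCRATCH, KERN_CR3, KERN_RSP) are illustrative, not necessarily OpenBSD's:

```asm
/* Hedged sketch of a syscall/trap trampoline entering the kernel:
 * switch to the full kernel page table, then move off the per-CPU
 * trampoline stack onto the thread's real kernel stack. */
	swapgs
	movq	%rax,CPUVAR(SCRATCH)	/* free a scratch register */
	movq	CPUVAR(KERN_CR3),%rax
	movq	%rax,%cr3		/* kernel page table: all mappings */
	movq	CPUVAR(KERN_RSP),%rax
	xchgq	%rax,%rsp		/* thread's real kernel stack */
	/* ...copy the needed frame words over from the trampoline stack
	 * (now addressed via %rax), then continue as a normal trap... */
```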



# 99c80879 06-Jan-2018 guenther <guenther@openbsd.org>

Handle %gs like %[def]s and reset it in cpu_switchto() instead of on
every return to userspace.

ok kettenis@ mlarkin@


# 6950c8e2 04-Sep-2016 mpi <mpi@openbsd.org>

Introduce Dynamic Profiling, a ddb(4) based & gprof compatible kernel
profiling framework.

Code patching is used to enable probes when entering functions. The
probes will call a mcount()-like function to match the behavior of a
GPROF kernel.

Currently only available on amd64 and guarded under DDBPROF. Support
for other archs will follow soon.

A new sysctl knob, ddb.console, needs to be set to 1 in securelevel 0
to be able to use this feature.

Inputs and ok guenther@



# 92f33f2b 17-Jul-2015 guenther <guenther@openbsd.org>

Consistently use SEL_RPL as the mask when testing selector privilege level


# b13138f2 18-May-2015 guenther <guenther@openbsd.org>

Do lazy update/reset of the FS.base and %[def]s segment registers: resetting
segment registers in cpu_switchto if the old thread had made it to userspace
and restoring FS.base only on first return to userspace since context switch.

ok mlarkin@



# 0170fe90 17-Apr-2012 guenther <guenther@openbsd.org>

Don't try to cache the CPU's FS.base, as userland can make it a lie by
setting %fs, resulting in it not getting restored properly later

ok mikeb@ deraadt@


# 1396572d 04-Jul-2011 guenther <guenther@openbsd.org>

Force the sigreturn syscall to return to userspace via iretq by setting
the MDP_IRET flag in md_proc, then switch sigcode to enter the kernel
via syscall instead of int$80. Rearrange the return paths in both the
sysretq and iretq paths to reduce how long interrupts are blocked and
shave instructions.

ok kettenis@, extra testing krw@



# f1665d79 13-Apr-2011 guenther <guenther@openbsd.org>

Unrevert the FS.base diff: the issues were actually elsewhere
Additional testing by jasper@ and pea@


# 727ed795 10-Apr-2011 guenther <guenther@openbsd.org>

Revert bulk of the FS.base diff, as it causes issues on some machines
and the problem isn't obvious yet.


# 2b690be9 05-Apr-2011 guenther <guenther@openbsd.org>

Add support for per-rthread base-offset for the %fs selector on amd64.
Add pcb_fsbase to the PCB for tracking what the value for the thread
is, and ci_cur_fsbase to struct cpu_info for tracking the CPU's current
value for FS.base, then on return to user-space, skip the setting if the
CPU has the right value already. Non-threaded processes without TLS leave
FS.base zero, which can be conveniently optimized: setting %fs zeros
FS.base for fewer cycles than wrmsr.

ok kettenis@
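
The skip-if-current test can be sketched as follows (field names PCB_FSBASE and CUR_FSBASE are illustrative; wrmsr takes the MSR number in %ecx and the 64-bit value split across %edx:%eax):

```asm
/* Illustrative sketch: on return to userspace, avoid the costly wrmsr
 * when this CPU already holds the thread's FS.base. */
	movq	PCB_FSBASE(%r13),%rdx
	cmpq	CPUVAR(CUR_FSBASE),%rdx
	je	1f			/* already current: nothing to do */
	movq	%rdx,CPUVAR(CUR_FSBASE)
	movl	%edx,%eax		/* low 32 bits -> %eax */
	shrq	$32,%rdx		/* high 32 bits -> %edx */
	movl	$MSR_FSBASE,%ecx
	wrmsr				/* load the new FS.base */
1:
```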


