History log of /netbsd-src/sys/kern/kern_resource.c (Results 1 – 25 of 195)
Revision Date Author Comments
# a355028f 04-Oct-2023 ad <ad@NetBSD.org>

Eliminate l->l_ncsw and l->l_nivcsw. From memory I think they were added
before we had per-LWP struct rusage; the same is now tracked there.


# 59e0001f 23-Sep-2023 ad <ad@NetBSD.org>

Reapply this change with a couple of bugs fixed:

- Do away with separate pool_cache for some kernel objects that have no special
requirements and use the general purpose allocator instead. On one of my
test systems this makes for a small (~1%) but repeatable reduction in system
time during builds presumably because it decreases the kernel's cache /
memory bandwidth footprint a little.
- vfs_lockf: cache a pointer to the uidinfo and put mutex in the data segment.

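A minimal sketch of the allocation-style change described above, using a
hypothetical struct foo with its own pool cache; the names are illustrative,
not taken from the commit:

        /* before: a dedicated pool_cache for the object type */
        struct foo *f = pool_cache_get(foo_cache, PR_WAITOK);
        /* ... use f ... */
        pool_cache_put(foo_cache, f);

        /* after: the general purpose allocator */
        struct foo *f = kmem_alloc(sizeof(*f), KM_SLEEP);
        /* ... use f ... */
        kmem_free(f, sizeof(*f));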


# ef0f79c8 12-Sep-2023 ad <ad@NetBSD.org>

Back out the recent change to replace pool_cache with the general allocator.
Will return to this when I have time again.


# cbcf86cb 10-Sep-2023 ad <ad@NetBSD.org>

- Do away with separate pool_cache for some kernel objects that have no special
requirements and use the general purpose allocator instead. On one of my
test systems this makes for a small (~1%) but repeatable reduction in system
time during builds presumably because it decreases the kernel's cache /
memory bandwidth footprint a little.
- vfs_lockf: cache a pointer to the uidinfo and put mutex in the data segment.



# 42ba7a81 08-Jul-2023 riastradh <riastradh@NetBSD.org>

clock_gettime(2): Fix CLOCK_PROCESS/THREAD_CPUTIME_ID.

Use the same calculation as getrusage, not some ad-hoc arithmetic of
internal scheduler parameters that are periodically rewound.

PR kern/57512

XXX pullup-8
XXX pullup-9
XXX pullup-10

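A minimal user-space sketch of the clock this fix concerns; it simply queries
per-process CPU time, which after the fix matches what getrusage reports:

        #include <stdio.h>
        #include <time.h>

        int
        main(void)
        {
                struct timespec ts;

                if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts) == -1)
                        return 1;
                printf("process cpu time: %lld.%09ld s\n",
                    (long long)ts.tv_sec, ts.tv_nsec);
                return 0;
        }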


# 2b6c2cf4 08-Jul-2023 riastradh <riastradh@NetBSD.org>

kern_resource.c: Fix brace placement.

No functional change intended.


# ef3476fb 09-Apr-2022 riastradh <riastradh@NetBSD.org>

sys: Use membar_release/acquire around reference drop.

This just goes through my recent reference count membar audit and
changes membar_exit to membar_release and membar_enter to
membar_acquire -- this should make everything cheaper on most CPUs
without hurting correctness, because membar_acquire is generally
cheaper than membar_enter.

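A minimal sketch of the resulting reference-drop pattern, assuming a
hypothetical struct obj with a refcnt member and an obj_free() destructor:

        membar_release();
        if (atomic_dec_uint_nv(&obj->refcnt) != 0)
                return;
        membar_acquire();
        /* last reference gone; prior uses by all threads happen-before here */
        obj_free(obj);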


# 122a3e8a 12-Mar-2022 riastradh <riastradh@NetBSD.org>

sys: Membar audit around reference count releases.

If two threads are using an object that is freed when the reference
count goes to zero, we need to ensure that all memory operations
related to the object happen before freeing the object.

Using an atomic_dec_uint_nv(&refcnt) == 0 ensures that only one
thread takes responsibility for freeing, but it's not enough to
ensure that the other thread's memory operations happen before the
freeing.

Consider:

        Thread A                        Thread B
        obj->foo = 42;                  obj->baz = 73;
        mumble(&obj->bar);              grumble(&obj->quux);
        /* membar_exit(); */            /* membar_exit(); */
        atomic_dec -- not last          atomic_dec -- last
                                        /* membar_enter(); */
                                        KASSERT(invariant(obj->foo,
                                            obj->bar));
                                        free_stuff(obj);

The memory barriers ensure that

        obj->foo = 42;
        mumble(&obj->bar);

in thread A happens before

        KASSERT(invariant(obj->foo, obj->bar));
        free_stuff(obj);

in thread B. Without them, this ordering is not guaranteed.

So in general it is necessary to do

        membar_exit();
        if (atomic_dec_uint_nv(&obj->refcnt) != 0)
                return;
        membar_enter();

to release a reference, for the `last one out hit the lights' style
of reference counting. (This is in contrast to the style where one
thread blocks new references and then waits under a lock for existing
ones to drain with a condvar -- no membar needed thanks to mutex(9).)

I searched for atomic_dec to find all these. Obviously we ought to
have a better abstraction for this because there's so much copypasta.
This is a stop-gap measure to fix actual bugs until we have that. It
would be nice if an abstraction could gracefully handle the different
styles of reference counting in use -- some years ago I drafted an
API for this, but making it cover everything got a little out of hand
(particularly with struct vnode::v_usecount) and I ended up setting
it aside to work on psref/localcount instead for better scalability.

I got bored of adding #ifdef __HAVE_ATOMIC_AS_MEMBAR everywhere, so I
only put it on things that look performance-critical on 5sec review.
We should really adopt membar_enter_preatomic/membar_exit_postatomic
or something (except they are applicable only to atomic r/m/w, not to
atomic_load/store_*, making the naming annoying) and get rid of all
the ifdefs.

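A minimal sketch of the contrasting drain style mentioned above, where
mutex(9) and a condvar provide the ordering and no explicit membars are
needed; the object layout and names are hypothetical:

        mutex_enter(&obj->lock);
        obj->dying = true;              /* block new references */
        while (obj->refcnt > 0)
                cv_wait(&obj->cv, &obj->lock);
        mutex_exit(&obj->lock);
        free_stuff(obj);                /* no other users remain */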


# 0eaaa024 23-May-2020 ad <ad@NetBSD.org>

Move proc_lock into the data segment. It was dynamically allocated because
at the time we had mutex_obj_alloc() but not __cacheline_aligned.
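
A rough before/after sketch of the change; the old allocation call is
reconstructed from the description rather than quoted from the source:

        /* before: allocated at run time */
        kmutex_t *proc_lock;
        proc_lock = mutex_obj_alloc(MUTEX_DEFAULT, IPL_NONE);

        /* after: placed in the data segment, cache-line aligned */
        kmutex_t proc_lock __cacheline_aligned;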


# ce578dfc 21-Feb-2020 joerg <joerg@NetBSD.org>

Explicitly cast pointers to uintptr_t before casting to enums. They are
not necessarily the same size. Don't cast pointers to bool, check for
NULL instead.
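
A small illustration of the two idioms asked for; the enum and variable
names are invented:

        enum kind k = (enum kind)(uintptr_t)cookie;     /* not (enum kind)cookie */
        bool mapped = (ptr != NULL);                    /* not (bool)ptr */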


# 82002773 15-Feb-2020 ad <ad@NetBSD.org>

- Move the LW_RUNNING flag back into l_pflag: updating l_flag without lock
in softint_dispatch() is risky. May help with the "softint screwup"
panic.

- Correct the memory barriers around zombies switching into oblivion.



# 2ddceed1 08-Jan-2020 ad <ad@NetBSD.org>

Hopefully fix some problems seen with MP support on non-x86, in particular
where curcpu() is defined as curlwp->l_cpu:

- mi_switch(): undo the ~2007ish optimisation to unlock curlwp before
calling cpu_switchto(). It's not safe to let other actors mess with the
LWP (in particular l->l_cpu) while it's still context switching. This
removes l->l_ctxswtch.

- Move the LP_RUNNING flag into l->l_flag and rename to LW_RUNNING since
it's now covered by the LWP's lock.

- Ditch lwp_exit_switchaway() and just call mi_switch() instead. Everything
is in cache anyway so it wasn't buying much by trying to avoid saving old
state. This means cpu_switchto() will never be called with prevlwp ==
NULL.

- Remove some KERNEL_LOCK handling which hasn't been needed for years.



# b2d41f4a 21-Nov-2019 ad <ad@NetBSD.org>

calcru: ignore running softints, unless softint_timing is on.
Fixes crazy times reported for proc0.


# 31033f10 05-Apr-2019 mlelstv <mlelstv@NetBSD.org>

avoid underflow in user/system time.


# 9f866c51 13-May-2018 christos <christos@NetBSD.org>

correct the function name.


# ebee088d 09-May-2018 kre <kre@NetBSD.org>

Cause a process's user and system times to become non-decreasing.

This alters the invented (i.e., statistically calculated) values
that are returned - for small values, the new values are likely to
be different from what they were, but that's largely nonsense anyway
(except that the sum of utime & stime does equal the cpu time consumed
by the process). Once the values get large enough to be meaningful,
the difference made by this change will be in the noise, and irrelevant.

This needs a couple of additions to struct proc, so we are now into 8.99.17

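One way to read the change is as a clamp against the values reported
previously; a hypothetical sketch, with invented field names rather than the
actual struct proc additions:

        /* never let reported user time go backwards */
        if (timercmp(&ut, &p->p_prev_ut, <))
                ut = p->p_prev_ut;              /* reuse the last reported value */
        else
                p->p_prev_ut = ut;              /* remember the new maximum */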


# 4656c477 08-May-2018 christos <christos@NetBSD.org>

get the maxrss from the vmspace field, and handle platforms that don't
have pmap statistics here.


# e0983e96 07-May-2018 christos <christos@NetBSD.org>

Load the struct rusage text, data, and stack fields from the vmspace struct.
Before they were all 0. We update them when we call getrusage() or on
process exit() so that the children's rusage is accounted for.

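A hedged sketch of the idea: derive the rusage segment sizes from the vmspace
counters, converting pages to the kilobyte units rusage uses; the shift and
field choices are an assumption, not a quotation of the actual code:

        struct vmspace *vm = p->p_vmspace;
        struct rusage *ru = &p->p_stats->p_ru;

        ru->ru_ixrss = vm->vm_tsize << (PAGE_SHIFT - 10);      /* text */
        ru->ru_idrss = vm->vm_dsize << (PAGE_SHIFT - 10);      /* data */
        ru->ru_isrss = vm->vm_ssize << (PAGE_SHIFT - 10);      /* stack */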


# 90159199 08-Apr-2018 mlelstv <mlelstv@NetBSD.org>

limits are bytes, vm sizes are clicks.


# af651997 24-Mar-2017 pgoyette <pgoyette@NetBSD.org>

Add new sysctl variable proc.curproc.paxflags so a process can determine
which flags were set for it. Define some values for the variable:

CTL_PROC_PAXFLAGS_{ASLR,MPROTECT,GUARD}
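
A minimal user-space sketch of reading the new node; treating the value as an
int is an assumption:

        #include <sys/sysctl.h>
        #include <stdio.h>

        int
        main(void)
        {
                int flags;
                size_t len = sizeof(flags);

                if (sysctlbyname("proc.curproc.paxflags", &flags, &len,
                    NULL, 0) == -1)
                        return 1;
                printf("paxflags = %#x\n", flags);
                return 0;
        }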


# 84b8b47b 13-Jul-2016 njoly <njoly@NetBSD.org>

In dosetrlimit() round stack hard limit just like soft one.
Avoid cases where hard limit becomes smaller than soft limit.


# f0a7346d 18-Oct-2014 snj <snj@NetBSD.org>

src is too big these days to tolerate superfluous apostrophes. It's
"its", people!


# 4f6fb3bf 25-Feb-2014 pooka <pooka@NetBSD.org>

Ensure that the top level sysctl nodes (kern, vfs, net, ...) exist before
the sysctl link sets are processed, and remove redundancy.

Shaves >13kB off of an amd64 GENERIC, not to mention >1k duplicate
lines of code.



# 8f9db9bc 07-Jan-2013 chs <chs@NetBSD.org>

fix setrlimit(RLIMIT_STACK) for __MACHINE_STACK_GROWS_UP platforms.


# 0558d820 21-Dec-2012 njoly <njoly@NetBSD.org>

One semicolon is enough.

