e2bf3321 | 25-Jan-2025 | bluhm <bluhm@openbsd.org>
Keep socket lock in sonewconn() for new connection.

For TCP input unlocking we need a consistent lock on the newly created socket. Instead of releasing the lock in sonewconn() and grabbing it again later, it is better that sonewconn() returns a locked socket. For now only syn_cache_get(), which calls in_pcbsounlock_rele() at the end, is changed. Following diffs will push the unlock into tcp_input().

OK mvs@
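
A minimal sketch of the pattern this commit moves to: the creation routine hands back the new object already locked, and the caller drops that lock exactly once when it is done. The names (conn, conn_new_locked, accept_one) are hypothetical stand-ins, not the kernel functions.

    #include <pthread.h>
    #include <stdlib.h>

    struct conn {
        pthread_mutex_t lock;
        int             state;
    };

    /* like sonewconn() after the change: the returned connection is locked */
    struct conn *
    conn_new_locked(void)
    {
        struct conn *c = calloc(1, sizeof(*c));

        if (c == NULL)
            return (NULL);
        pthread_mutex_init(&c->lock, NULL);
        pthread_mutex_lock(&c->lock);
        return (c);
    }

    /* caller (compare syn_cache_get()): work on the locked connection,
     * then unlock once at the end */
    void
    accept_one(void)
    {
        struct conn *c = conn_new_locked();

        if (c == NULL)
            return;
        c->state = 1;                   /* set up while still locked */
        pthread_mutex_unlock(&c->lock);
        pthread_mutex_destroy(&c->lock);
        free(c);
    }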

e28482df | 20-Jan-2025 | bluhm <bluhm@openbsd.org>
Do not unlock the socket in soabort().

One difference between UNIX and internet sockets is that UNIX sockets unlock in soabort() while TCP does not. in_pcbdetach() keeps the lock, so change uipc_abort() to behave similarly. This also gives symmetric lock and unlock in the caller. A refcount is needed to call unlock on an aborted socket. Queue 0 in soclose() is only used by UNIX sockets, so remove the "if" persocket. The "kassert" persocket in soisconnected() is not needed.

OK mvs@
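
A minimal sketch of the calling convention this commit establishes: the abort routine works on a locked socket and leaves it locked, the caller locks and unlocks symmetrically, and an extra reference keeps the socket alive so the final unlock is safe. All types and names are hypothetical, and the reference counting is simplified (no atomics) for brevity.

    #include <pthread.h>
    #include <stdlib.h>

    struct sock_x {
        pthread_mutex_t lock;
        int             refcnt;
    };

    static void
    sock_rele(struct sock_x *so)
    {
        if (--so->refcnt == 0) {
            pthread_mutex_destroy(&so->lock);
            free(so);
        }
    }

    /* abort works on a locked socket and leaves it locked */
    static void
    sock_abort(struct sock_x *so)
    {
        (void)so;       /* tear down connection state here; do not unlock */
    }

    void
    close_path(struct sock_x *so)
    {
        pthread_mutex_lock(&so->lock);
        so->refcnt++;                   /* keep `so' valid across the abort */
        sock_abort(so);
        pthread_mutex_unlock(&so->lock); /* symmetric with the lock above */
        sock_rele(so);
    }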

66570633 | 01-Jan-2025 | bluhm <bluhm@openbsd.org>
Fix whitespace.

af61481e | 05-Nov-2024 | jsg <jsg@openbsd.org>
remove VATTR_NULL() define, directly call vattr_null()

There used to be a predefined null vattr for !DIAGNOSTIC but that was removed in vnode.h rev 1.84 in 2007.

ok semarie@ miod@

5d7086de | 22-Sep-2024 | claudio <claudio@openbsd.org>
Increase the default buffer size for AF_UNIX from 8192 to 32768.

Using 8k for socketpairs was always on the low side. This also avoids a fatal error in sshd that can be triggered when the network stack is pushed hard enough to consume most of the allowed memory. By increasing the default buffer size a bit, the error in sshd is avoided, which is good enough for now.

Long term, a better solution for sonewconn() and especially sbchecklowmem() needs to be found. m_pool_used() does not return the right information for them.

OK deraadt@ otto@
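
For readers looking at this from userland, the tunable in question is the default AF_UNIX send/receive buffer size. A small illustrative program, using only standard socket calls, that inspects the default on a socketpair and requests the 32768-byte size discussed above:

    #include <sys/socket.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        int fds[2], size = 0;
        socklen_t len = sizeof(size);

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == -1)
            return (1);
        if (getsockopt(fds[0], SOL_SOCKET, SO_SNDBUF, &size, &len) == 0)
            printf("default SO_SNDBUF: %d\n", size);

        size = 32768;   /* the new kernel default described above */
        setsockopt(fds[0], SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));

        close(fds[0]);
        close(fds[1]);
        return (0);
    }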

fec7750f | 06-Aug-2024 | mvs <mvs@openbsd.org>
Use atomic_load_int(9) for unlocked read access to net.unix.*space sysctl(2) variables.

ok bluhm
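
As an analogy only, the same pattern expressed with standard C11 atomics rather than the kernel's atomic_load_int(9): the writer updates the tunable under its own lock, and readers load it without any lock, so the load cannot be torn even though it may observe a slightly stale value. The variable and function names are hypothetical.

    #include <stdatomic.h>

    static _Atomic int unp_sendspace_example = 8192;    /* hypothetical tunable */

    /* sysctl-like writer: runs with the subsystem lock held */
    void
    tunable_set(int val)
    {
        atomic_store_explicit(&unp_sendspace_example, val,
            memory_order_relaxed);
    }

    /* fast-path reader: no lock taken; the load cannot be torn,
     * though it may see a slightly stale value */
    int
    tunable_get(void)
    {
        return atomic_load_explicit(&unp_sendspace_example,
            memory_order_relaxed);
    }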

161e4927 | 28-Jun-2024 | mvs <mvs@openbsd.org>
Restore the original EPIPE and ENOTCONN error priority in the uipc_send() path, changed in rev 1.206. At least acme-client(1) is not happy with that change.

Reported by claudio. Tests and ok by bluhm.

d1dd449e | 26-Jun-2024 | mvs <mvs@openbsd.org>
Push socket re-lock to the vnode(9) release path within unp_detach(). The only reason to re-lock the dying `so' is the lock order with the vnode(9) lock; thus the `unp_gc_lock' rwlock(9) can be taken after solock().

ok bluhm

bb0cd11a | 03-May-2024 | mvs <mvs@openbsd.org>
Push solock() down to sosend() and remove it from soreceive() paths for unix(4) sockets.

Transmission on unix(4) sockets is already half-unlocked because the connected peer is not locked by solock() during the sbappend*() call. Use the `sb_mtx' mutex(9) and `sb_lock' rwlock(9) to protect both `so_snd' and `so_rcv'.

Since `so_snd' is protected by the `sb_mtx' mutex(9), re-locking is not required in uipc_rcvd().

Do direct `so_rcv' disposal and cleanup in sofree(). This socket is almost dead and unlinked from everywhere, including the spliced peer, so a concurrent sotask() thread will just exit. This is required to keep the lock order between `i_lock' and `sb_lock'. It also removes re-locking from sofree() for all sockets.

SB_OWNLOCK became redundant with SB_MTXLOCK, so remove it. SB_MTXLOCK was kept because checks against SB_MTXLOCK within the sb*() routines are more consistent.

Feedback and ok bluhm
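
A minimal sketch of the per-buffer locking this series moves to: the sender takes only the peer's receive buffer mutex to append data, instead of the big per-socket lock. The struct layout and helper names are hypothetical stand-ins for the kernel's sockbuf and sbappend*().

    #include <pthread.h>

    struct sockbuf_x {
        pthread_mutex_t sb_mtx;         /* protects the buffer contents */
        long            sb_cc;          /* bytes queued */
    };

    struct socket_x {
        struct sockbuf_x so_rcv;
        struct sockbuf_x so_snd;
    };

    void
    socket_init(struct socket_x *so)
    {
        pthread_mutex_init(&so->so_rcv.sb_mtx, NULL);
        pthread_mutex_init(&so->so_snd.sb_mtx, NULL);
        so->so_rcv.sb_cc = so->so_snd.sb_cc = 0;
    }

    /* sender appends to the peer's receive buffer without any big socket lock */
    void
    deliver(struct socket_x *peer, long len)
    {
        pthread_mutex_lock(&peer->so_rcv.sb_mtx);
        peer->so_rcv.sb_cc += len;      /* stands in for sbappend*() */
        pthread_mutex_unlock(&peer->so_rcv.sb_mtx);
    }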

db2d50c3 | 02-May-2024 | mvs <mvs@openbsd.org>
Don't re-lock sockets in uipc_shutdown().

There is no reason to lock the peer. It can't be or become a listening socket, and both sockets can't be in the middle of connecting or disconnecting.

ok bluhm

b11de491 | 10-Apr-2024 | mvs <mvs@openbsd.org>
Remove `head' socket re-locking in sonewconn().

uipc_attach() releases solock() because it should be taken after the `unp_gc_lock' rwlock(9), which protects the `unp_link' list. For this reason, the listening `head' socket had to be unlocked too while sonewconn() calls uipc_attach(). This can be reworked because the `so_rcv' sockbuf now relies on the `sb_mtx' mutex(9).

The last `unp_link' foreach loop within unp_gc() discards sockets previously marked as UNP_GCDEAD. These sockets are not accessed from userland. The only exception is the sosend() threads of connected sending peers, but they only sbappend*() mbuf(9) to `so_rcv'. So it is enough to unlink the mbuf(9) chain with `sb_mtx' held and discard it locklessly.

Please note, the existing SS_NEWCONN_WAIT logic was never used because the listening unix(4) socket is protected from concurrent unp_detach() by the vnode(9) lock; `head' was nevertheless re-locked every time.

ok bluhm

02a82712 | 26-Mar-2024 | mvs <mvs@openbsd.org>
Use `sb_mtx' to protect the `so_rcv' receive buffer of unix(4) sockets.

This makes re-locking unnecessary in the uipc_*send() paths, because it is enough to lock one socket to prevent the peer from concurrent disconnection. As a small bonus, one unix(4) socket can perform simultaneous transmission and reception, with one exception for uipc_rcvd(), which still requires the re-lock for connection-oriented sockets.

The socket lock is not held while filt_soread() and filt_soexcept() are called from uipc_*send() through sorwakeup(). However, the unlocked access to `so_options', `so_state' and `so_error' is fine.

The receiving socket can't be or become a listening socket. It also can't be disconnected concurrently. This makes the SO_ACCEPTCONN, SS_ISDISCONNECTED and SS_ISCONNECTED bits immutable: the first two stay clear and the last stays set.

`so_error' is set on the peer sockets only by unp_detach(), which also can't be called concurrently on the sending socket.

This is also true for filt_fiforead() and filt_fifoexcept(). For other callers like kevent(2) or doaccept() the socket lock is still held.

ok bluhm

fdada4b1 | 22-Mar-2024 | mvs <mvs@openbsd.org>
Use sorflush() instead of a direct unp_scan(..., unp_discard) to discard dead unix(4) sockets.

The difference between a direct unp_scan() and sorflush() is the mbuf(9) chain: in the first case it is still linked to `so_rcv', in the second it is not. This is required to make the `sb_mtx' mutex(9) the only `so_rcv' sockbuf protection and to remove socket re-locking from most of the uipc_*send() paths. The unlinked mbuf(9) chain doesn't require any protection, so this allows the sleeping unp_discard() to be performed locklessly.

Also, the mbuf(9) chain of the discarded socket still contains addresses of file descriptors, and it is much safer to unlink it before FRELE()ing them. This is the reason to commit this diff standalone.

ok bluhm
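
A minimal sketch of the "unlink under the lock, release outside it" technique described above: the whole chain is detached while the buffer mutex is held, after which it is private to the caller and can be freed, even by a sleeping routine, without any lock. Types and names are hypothetical, not the kernel's mbuf or sockbuf code.

    #include <pthread.h>
    #include <stdlib.h>

    struct chain { struct chain *next; };

    struct rcvbuf {
        pthread_mutex_t mtx;
        struct chain    *head;
    };

    void
    discard(struct rcvbuf *rb)
    {
        struct chain *c, *next;

        /* detach the whole chain while the buffer mutex is held */
        pthread_mutex_lock(&rb->mtx);
        c = rb->head;
        rb->head = NULL;
        pthread_mutex_unlock(&rb->mtx);

        /* the detached chain is private now; releasing it may even sleep */
        for (; c != NULL; c = next) {
            next = c->next;
            free(c);
        }
    }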

d1ea0a7c | 17-Mar-2024 | mvs <mvs@openbsd.org>
Check the UNP_CONNECTING and UNP_BINDING flags in uipc_listen() and return EINVAL if either is set. This prevents a concurrent solisten() thread from making this socket a listening socket while the socket is unlocked.

Reported-by: syzbot+4acfcd73d15382a3e7cf@syzkaller.appspotmail.com

ok mpi
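
A minimal sketch of the guard described above. The flag names follow the commit message, but their values and the surrounding structure are hypothetical.

    #include <errno.h>

    #define UNP_BINDING     0x01    /* bind(2) in progress, socket unlocked */
    #define UNP_CONNECTING  0x02    /* connect(2) in progress, socket unlocked */

    struct unpcb_x {
        int     unp_flags;
    };

    int
    listen_guard(struct unpcb_x *unp)
    {
        /* refuse to enter the listening state while another thread
         * still has the socket unlocked for bind or connect */
        if (unp->unp_flags & (UNP_BINDING | UNP_CONNECTING))
            return (EINVAL);
        return (0);
    }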

beb6838f | 28-Nov-2023 | jsg <jsg@openbsd.org>
correct spelling of FALLTHROUGH

f3489359 | 31-Mar-2023 | jsg <jsg@openbsd.org>
remove unused unp_lock

ok kn@ mvs@

9e437519 | 21-Jan-2023 | mvs <mvs@openbsd.org>
Introduce per-sockbuf `sb_state' to use it with SS_CANTSENDMORE.

This time, the socket's buffer lock requires solock() to be held. As part of the standalone socket buffer locking work, move the socket state bits which represent the buffers' state to per-buffer state.

Unlike the previously reverted diff, the SS_CANTSENDMORE definition is left as is, but it is used only with `sb_state'. `sb_state' is ORed with the original `so_state' when the socket's data is exported to userland, so the ABI is kept as it was.

Inputs from deraadt@.

ok bluhm@
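
A minimal sketch of the split described above: a "can't send more" bit lives in the buffer's own state word, and the export path ORs it back into the socket state so userland sees an unchanged layout. Struct names and the bit value are hypothetical.

    #define SS_CANTSENDMORE 0x0010          /* value is illustrative only */

    struct sockbuf_y {
        unsigned int    sb_state;           /* per-buffer state bits */
    };

    struct socket_y {
        unsigned int    so_state;           /* remaining per-socket bits */
        struct sockbuf_y so_snd;
    };

    /* export path: userland still sees the bit in the combined state word */
    unsigned int
    export_state(const struct socket_y *so)
    {
        return (so->so_state | so->so_snd.sb_state);
    }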

33da1efb | 12-Dec-2022 | tb <tb@openbsd.org>
Revert sb_state changes to unbreak tree.

cba97bf9 | 11-Dec-2022 | mvs <mvs@openbsd.org>
This time, the socket's buffer lock requires solock() to be held. As part of the standalone socket buffer locking work, move the socket state bits which represent the buffers' state to per-buffer state. Introduce `sb_state' and turn SS_CANTSENDMORE into SBS_CANTSENDMORE. This bit will be processed on the `so_snd' buffer only.

Move the SS_CANTRCVMORE and SS_RCVATMARK bits with a separate diff to make review easier and exclude possible so_rcv/so_snd mistypes.

Also, don't adjust the remaining SS_* bits right now.

ok millert@

2b46a8cb | 05-Dec-2022 | deraadt <deraadt@openbsd.org>
zap a pile of dangling tabs

57177aaf | 26-Nov-2022 | mvs <mvs@openbsd.org>
Merge uipc_bind() with unp_bind(). Unlike other unp_*() functions, unp_bind() has uipc_bind() as its only caller. In the uipc_usrreq() days it made sense to have a dedicated unp_bind() to avoid cluttering the giant switch(), but it no longer does.

ok bluhm@

5a8609ff | 15-Nov-2022 | mvs <mvs@openbsd.org>
style(9) fix. No functional change.

67a10ba2 | 13-Nov-2022 | mvs <mvs@openbsd.org>
Split out handlers for SOCK_DGRAM unix(4) sockets from SOCK_STREAM and SOCK_SEQPACKET. Introduce `uipc_dgram_usrreqs' to store pointers to the dgram-specific handlers.

The dgram pru_shutdown and pru_send handlers were split out into uipc_dgram_shutdown() and uipc_dgram_send(). The pru_accept, pru_rcvd and pru_abort handlers are not required for dgram sockets.

unp_disconnect() remains shared between all unix(4) sockets because it is called from common paths too.

Proposed by and ok guenther@
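
A minimal sketch of a per-socket-type handler table, the general pattern this commit applies: a struct of function pointers where the datagram variant simply leaves unneeded handlers NULL. The struct layout and handler names here are hypothetical, not the kernel's pru_* interface.

    struct proto_ops_x {
        int     (*pr_send)(void *sock, const void *buf, long len);
        int     (*pr_shutdown)(void *sock);
        int     (*pr_accept)(void *sock, void **newsock);  /* may be NULL */
    };

    static int
    dgram_send(void *sock, const void *buf, long len)
    {
        (void)sock; (void)buf; (void)len;
        return (0);             /* would queue the datagram here */
    }

    static int
    dgram_shutdown(void *sock)
    {
        (void)sock;
        return (0);             /* would mark the socket shut down here */
    }

    /* datagram sockets get their own table; unneeded handlers stay NULL */
    static const struct proto_ops_x dgram_ops = {
        .pr_send        = dgram_send,
        .pr_shutdown    = dgram_shutdown,
        .pr_accept      = NULL,
    };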

280d7fb5 | 17-Oct-2022 | mvs <mvs@openbsd.org>
Change the pru_abort() return type to void and make pru_abort() optional.

We have no interest in the pru_abort() return value. We call it only from soabort(), which is a dummy pru_abort() wrapper and has no return value.

Only connection-oriented sockets need to implement a (*pru_abort)() handler. Such sockets are tcp(4) and unix(4) sockets, so remove the existing code for all others; it is never called.

ok guenther@

62440853 | 03-Oct-2022 | bluhm <bluhm@openbsd.org>
System calls should not fail due to a temporary memory shortage in malloc(9) or pool_get(9). Pass down a wait flag to pru_attach(). During the socket(2) syscall it is ok to wait; this logic was missing for the internet PCB. Pfkey and route sockets were already waiting. sonewconn() must not wait when called during the TCP 3-way handshake; this logic has been preserved. A Unix domain stream socket connect(2) can wait until the other side has created the socket to accept.

OK mvs@
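
A minimal sketch of the wait-flag pattern described above: the attach path forwards the caller's wait preference to the allocator, so socket(2) can use a waiting allocation while sonewconn() in the TCP handshake cannot. The M_WAITOK/M_NOWAIT names mirror the usual malloc(9) flags, but their values here and the attach/allocator functions are hypothetical.

    #include <errno.h>
    #include <stdlib.h>

    #define M_WAITOK        0x0001          /* values are illustrative */
    #define M_NOWAIT        0x0002

    struct pcb_x { int refs; };

    /* stand-in for pool_get(9)/malloc(9); a real M_WAITOK allocation
     * sleeps instead of returning NULL */
    static void *
    kalloc(size_t size, int wait)
    {
        (void)wait;
        return malloc(size);
    }

    int
    attach_x(struct pcb_x **pcbp, int wait)
    {
        /* socket(2) passes M_WAITOK and is not supposed to fail on a
         * temporary shortage; sonewconn() in the TCP handshake passes
         * M_NOWAIT and may fail */
        *pcbp = kalloc(sizeof(struct pcb_x), wait);
        if (*pcbp == NULL)
            return (ENOBUFS);
        return (0);
    }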