xref: /minix3/minix/net/lwip/ndev.c (revision ef8d499e2d2af900e9b2ab297171d7b088652482)
/* LWIP service - ndev.c - network driver communication module */
/*
 * There is almost a one-to-one mapping between network device driver (ndev)
 * objects and ethernet interface (ethif) objects.  The major difference is
 * that there may be an ndev object without an ethif object for a driver that
 * is known to exist but has not yet replied to our initialization request:
 * without the information from the initialization reply, there is no point in
 * creating an ethif object just yet, while we do need to track the driver
 * process.  TODO: it would be nice if unanswered init requests timed out and
 * caused the removal of the ndev object after a while.
 *
 * Beyond that, this module aims to abstract away the low-level details of
 * communication, memory grants, and driver restarts.  Driver restarts are not
 * fully transparent to the ethif module, because after a restart it needs to
 * reinitialize driver state that only it knows about.  Drivers that are in
 * the process of restarting, and therefore not operational, are said to be
 * disabled.
 *
 * From this module's point of view, a network driver is in one of two states:
 * initializing, where it has yet to respond to our initialization request,
 * and active, where it is expected to accept and respond to all other
 * requests.  This module does not keep track of higher-level states and
 * rules, however; that is left to the ethif layer on one side, and the
 * network driver itself on the other side.  One important example is the
 * interface being up or down: the ndev layer will happily forward send and
 * receive requests while the interface is down, but in that state these
 * requests will be (respectively) dropped and rejected by the network driver,
 * and the ethif layer will not generate them in the first place.  Imposing
 * barriers between configure and send requests is also left to the other
 * parties.
 *
 * In this module, each active network driver has a send queue and a receive
 * queue.  The send queue is shared for packet send requests and configuration
 * change requests.  The receive queue is used for packet receive requests
 * only.  Each queue has a maximum depth, which is the minimum of a value
 * provided by the network driver during initialization and local restrictions.
 * These local restrictions are different for the two queue types: the receive
 * queue is always bounded to a hardcoded value, while the send queue has a
 * guaranteed minimum depth but may use up to the driver's maximum using spare
 * entries.  For both, a minimum depth is always available, since it is not
 * possible to cancel individual send or receive requests after they have been
 * sent to a particular driver.  This does mean that we necessarily waste a
 * large number of request structures in the common case.
 *
 * The general API model does not support the notion of blocking calls.  While
 * it would make sense to retrieve e.g. error statistics from the driver only
 * when requested by userland, implementing this without threads would be
 * seriously complicated, because such requests can have many origins (ioctl,
 * PF_ROUTE message, sysctl).  Instead, we rely on drivers updating us with the
 * latest information on everything at all times, so that we can hand over a
 * cached copy of (e.g.) those error statistics right away.  We provide a means
 * for drivers to perform rate limiting of such status updates (to prevent
 * overflowing asynsend queues), by replying to these status messages.  That
 * means that there is a request-response combo going in the opposite direction
 * of the regular messages.
 *
 * TODO: in the future we will want to obtain the list of supported media modes
 * (IFM_) from drivers, so that userland can view the list.  Given the above
 * model, the easiest way would be to obtain a copy of the full list, limited
 * to a configured number of entries, at driver initialization time.  This
 * would require that the initialization request also involve a memory grant.
 *
 * If necessary, it would not be too much work to split off this module into
 * its own libndev library.  For now, there is no point in doing this, and the
 * tighter coupling allows us to optimize just a little bit (see pbuf usage).
 */

#include "lwip.h"
#include "ndev.h"
#include "ethif.h"

#define LABEL_MAX	16	/* FIXME: this should be in a system header */

#define NDEV_SENDQ	2	/* minimum guaranteed send queue depth */
#define NDEV_RECVQ	2	/* guaranteed receive queue depth */
#define NREQ_SPARES	8	/* spare send queue (request) objects */
#define NR_NREQ		((NDEV_SENDQ + NDEV_RECVQ) * NR_NDEV + NREQ_SPARES)

static SIMPLEQ_HEAD(, ndev_req) nreq_freelist;

static struct ndev_req {
	SIMPLEQ_ENTRY(ndev_req) nreq_next;	/* next request in queue */
	int nreq_type;				/* type of request message */
	cp_grant_id_t nreq_grant[NDEV_IOV_MAX];	/* grants for request */
} nreq_array[NR_NREQ];

static unsigned int nreq_spares;	/* number of free spare objects */

struct ndev_queue {
	uint32_t nq_head;		/* ID of oldest pending request */
	uint8_t nq_count;		/* current nr of pending requests */
	uint8_t nq_max;			/* maximum nr of pending requests */
	SIMPLEQ_HEAD(, ndev_req) nq_req; /* queue of pending requests */
};

static struct ndev {
	endpoint_t ndev_endpt;		/* driver endpoint */
	char ndev_label[LABEL_MAX];	/* driver label */
	struct ethif *ndev_ethif;	/* ethif object, or NULL if init'ing */
	struct ndev_queue ndev_sendq;	/* packet send and configure queue */
	struct ndev_queue ndev_recvq;	/* packet receive queue */
} ndev_array[NR_NDEV];

static ndev_id_t ndev_max;		/* highest driver count ever seen */

/*
 * This macro checks whether the network driver is active rather than
 * initializing.  See above for more information.
 */
#define NDEV_ACTIVE(ndev)	((ndev)->ndev_sendq.nq_max > 0)

static int ndev_pending;		/* number of initializing drivers */

/* The CTL_MINIX MINIX_LWIP "drivers" subtree.  Dynamically numbered. */
static struct rmib_node minix_lwip_drivers_table[] = {
	RMIB_INTPTR(RMIB_RO, &ndev_pending, "pending",
	    "Number of drivers currently initializing"),
};

static struct rmib_node minix_lwip_drivers_node =
    RMIB_NODE(RMIB_RO, minix_lwip_drivers_table, "drivers",
	"Network driver information");

/*
 * Initialize the network driver communication module.
 */
void
ndev_init(void)
{
	unsigned int slot;
	int r;

	/* Initialize local variables. */
	ndev_max = 0;

	SIMPLEQ_INIT(&nreq_freelist);

	for (slot = 0; slot < __arraycount(nreq_array); slot++)
		SIMPLEQ_INSERT_TAIL(&nreq_freelist, &nreq_array[slot],
		    nreq_next);

	nreq_spares = NREQ_SPARES;

	/*
	 * Preallocate the total number of grants that we could possibly need
	 * concurrently.  Even though it is extremely unlikely that we will
	 * ever need that many grants in practice, the alternative is runtime
	 * dynamic memory (re)allocation which is something we prefer to avoid
	 * altogether.  At time of writing, we end up preallocating 320 grants
	 * using up a total of a bit under 9KB of memory.
	 */
	cpf_prealloc(NR_NREQ * NDEV_IOV_MAX);

	/*
	 * Not needed, just for ultimate safety: start off all queues with
	 * wildly different request sequence numbers, to minimize the chance
	 * that any two replies will ever be confused.
	 */
	for (slot = 0; slot < __arraycount(ndev_array); slot++) {
		ndev_array[slot].ndev_sendq.nq_head = slot << 21;
		ndev_array[slot].ndev_recvq.nq_head = (slot * 2 + 1) << 20;
	}

	/* Subscribe to Data Store (DS) events from network drivers. */
	if ((r = ds_subscribe("drv\\.net\\..*",
	    DSF_INITIAL | DSF_OVERWRITE)) != OK)
		panic("unable to subscribe to driver events: %d", r);

	/*
	 * Keep track of how many drivers are in "pending" state, which means
	 * that they have not yet replied to our initialization request.
	 */
	ndev_pending = 0;

	/* Register the minix.lwip.drivers subtree. */
	mibtree_register_lwip(&minix_lwip_drivers_node);
}

/*
 * Initialize a queue for first use.
 */
static void
ndev_queue_init(struct ndev_queue * nq)
{

	/*
	 * Only ever increase sequence numbers, to minimize the chance that
	 * two (e.g. from different driver instances) happen to be the same.
	 */
	nq->nq_head++;

	nq->nq_count = 0;
	nq->nq_max = 0;
	SIMPLEQ_INIT(&nq->nq_req);
}

/*
 * Advance the given request queue, freeing up the request at the head of the
 * queue including any grants in use for it.
 */
static void
ndev_queue_advance(struct ndev_queue * nq)
{
	struct ndev_req * nreq;
	cp_grant_id_t grant;
	unsigned int i;

	nreq = SIMPLEQ_FIRST(&nq->nq_req);

	for (i = 0; i < __arraycount(nreq->nreq_grant); i++) {
		grant = nreq->nreq_grant[i];

		if (!GRANT_VALID(grant))
			break;

		/* TODO: make the safecopies code stop using errno. */
		if (cpf_revoke(grant) != 0)
			panic("unable to revoke grant: %d", -errno);
	}

	if (nreq->nreq_type != NDEV_RECV && nq->nq_count > NDEV_SENDQ) {
		nreq_spares++;

		assert(nreq_spares <= NREQ_SPARES);
	}

	SIMPLEQ_REMOVE_HEAD(&nq->nq_req, nreq_next);

	SIMPLEQ_INSERT_HEAD(&nreq_freelist, nreq, nreq_next);

	nq->nq_head++;
	nq->nq_count--;
}

/*
 * Clear any outstanding requests from the given queue and reset it to a
 * pre-initialization state.
 */
static void
ndev_queue_reset(struct ndev_queue * nq)
{

	while (nq->nq_count > 0) {
		assert(!SIMPLEQ_EMPTY(&nq->nq_req));

		ndev_queue_advance(nq);
	}

	nq->nq_max = 0;
}

/*
 * Obtain a request object for use in a new request.  Return the request
 * object, with its request type field set to 'type', and with the request
 * sequence ID returned in 'seq'.  Return NULL if no request objects are
 * available for the given request type.  If the caller does send off the
 * request, a call to ndev_queue_add() must follow immediately after.  If the
 * caller fails to send off the request for other reasons, it need not do
 * anything: this function does not perform any actions that need to be undone.
 */
static struct ndev_req *
ndev_queue_get(struct ndev_queue * nq, int type, uint32_t * seq)
{
	struct ndev_req *nreq;

	/* Has the hard queue depth limit been reached? */
	if (nq->nq_count == nq->nq_max)
		return NULL;

	/*
	 * For send requests, we may use request objects from a shared "spares"
	 * pool, if available.
	 */
	if (type != NDEV_RECV && nq->nq_count >= NDEV_SENDQ &&
	    nreq_spares == 0)
		return NULL;

	assert(!SIMPLEQ_EMPTY(&nreq_freelist));
	nreq = SIMPLEQ_FIRST(&nreq_freelist);

	nreq->nreq_type = type;

	*seq = nq->nq_head + nq->nq_count;

	return nreq;
}

/*
 * Add a successfully sent request to the given queue.  The request must have
 * been obtained using ndev_queue_get() directly before the call to this
 * function.  This function never fails.
 */
static void
ndev_queue_add(struct ndev_queue * nq, struct ndev_req * nreq)
{

	if (nreq->nreq_type != NDEV_RECV && nq->nq_count >= NDEV_SENDQ) {
		assert(nreq_spares > 0);

		nreq_spares--;
	}

	SIMPLEQ_REMOVE_HEAD(&nreq_freelist, nreq_next);

	SIMPLEQ_INSERT_TAIL(&nq->nq_req, nreq, nreq_next);

	nq->nq_count++;
}

/*
 * Remove the head of the given request queue, but only if it matches the given
 * request type and sequence ID.  Return TRUE if the head was indeed removed,
 * or FALSE if the head of the request queue (if any) did not match the given
 * type and/or sequence ID.
 */
static int
ndev_queue_remove(struct ndev_queue * nq, int type, uint32_t seq)
{
	struct ndev_req *nreq;

	if (nq->nq_count < 1 || nq->nq_head != seq)
		return FALSE;

	assert(!SIMPLEQ_EMPTY(&nq->nq_req));
	nreq = SIMPLEQ_FIRST(&nq->nq_req);

	if (nreq->nreq_type != type)
		return FALSE;

	ndev_queue_advance(nq);

	return TRUE;
}

/*
 * Send an initialization request to a driver.  If this is a new driver, the
 * ethif module does not get to know about the driver until it answers to this
 * request, as the ethif module needs much of what the reply contains.  On the
 * other hand, if this is a restarted driver, it will stay disabled until the
 * init reply comes in.
 */
static void
ndev_send_init(struct ndev * ndev)
{
	message m;
	int r;

	memset(&m, 0, sizeof(m));
	m.m_type = NDEV_INIT;
	m.m_ndev_netdriver_init.id = ndev->ndev_sendq.nq_head;

	if ((r = asynsend3(ndev->ndev_endpt, &m, AMF_NOREPLY)) != OK)
		panic("asynsend to driver failed: %d", r);
}

/*
 * A network device driver has been started or restarted.
 */
static void
ndev_up(const char * label, endpoint_t endpt)
{
	static int reported = FALSE;
	struct ndev *ndev;
	ndev_id_t slot;

	/*
	 * First see if we already had an entry for this driver.  If so, it has
	 * been restarted, and we need to report it as not running to ethif.
	 */
	ndev = NULL;

	for (slot = 0; slot < ndev_max; slot++) {
		if (ndev_array[slot].ndev_endpt == NONE) {
			if (ndev == NULL)
				ndev = &ndev_array[slot];

			continue;
		}

		if (!strcmp(ndev_array[slot].ndev_label, label)) {
			/* Cancel any ongoing requests. */
			ndev_queue_reset(&ndev_array[slot].ndev_sendq);
			ndev_queue_reset(&ndev_array[slot].ndev_recvq);

			if (ndev_array[slot].ndev_ethif != NULL) {
				ethif_disable(ndev_array[slot].ndev_ethif);

				ndev_pending++;
			}

			ndev_array[slot].ndev_endpt = endpt;

			/* Attempt to resume communication. */
			ndev_send_init(&ndev_array[slot]);

			return;
		}
	}

	if (ndev == NULL) {
		/*
		 * If there is no free slot for this driver in our table, we
		 * necessarily have to ignore the driver altogether.  We report
		 * such cases once, so that the user can recompile if desired.
		 */
		if (ndev_max == __arraycount(ndev_array)) {
			if (!reported) {
				printf("LWIP: not enough ndev slots!\n");

				reported = TRUE;
			}
			return;
		}

		ndev = &ndev_array[ndev_max++];
	}

	/* Initialize the slot. */
	ndev->ndev_endpt = endpt;
	strlcpy(ndev->ndev_label, label, sizeof(ndev->ndev_label));
	ndev->ndev_ethif = NULL;
	ndev_queue_init(&ndev->ndev_sendq);
	ndev_queue_init(&ndev->ndev_recvq);

	ndev_send_init(ndev);

	ndev_pending++;
}

/*
 * A network device driver has been terminated.
 */
static void
ndev_down(struct ndev * ndev)
{

	/* Cancel any ongoing requests. */
	ndev_queue_reset(&ndev->ndev_sendq);
	ndev_queue_reset(&ndev->ndev_recvq);

	/*
	 * If this ndev object had a corresponding ethif object, tell the ethif
	 * layer that the device is really gone now.
	 */
	if (ndev->ndev_ethif != NULL)
		ethif_remove(ndev->ndev_ethif);
	else
		ndev_pending--;

	/* Remove the driver from our own administration. */
	ndev->ndev_endpt = NONE;

	while (ndev_max > 0 && ndev_array[ndev_max - 1].ndev_endpt == NONE)
		ndev_max--;
}

/*
 * The DS service has notified us of changes to our subscriptions.  That means
 * that network drivers may have been started, restarted, and/or shut down.
 * Find out what has changed, and act accordingly.
 */
void
ndev_check(void)
{
	static const char *prefix = "drv.net.";
	char key[DS_MAX_KEYLEN], *label;
	size_t prefixlen;
	endpoint_t endpt;
	uint32_t val;
	ndev_id_t slot;
	int r;

	prefixlen = strlen(prefix);

	/* Check whether any drivers have been (re)started. */
	while ((r = ds_check(key, NULL, &endpt)) == OK) {
		if (strncmp(key, prefix, prefixlen) != 0 || endpt == NONE)
			continue;

		if (ds_retrieve_u32(key, &val) != OK || val != DS_DRIVER_UP)
			continue;

		label = &key[prefixlen];
		if (label[0] == '\0' || memchr(label, '\0', LABEL_MAX) == NULL)
			continue;

		ndev_up(label, endpt);
	}

	if (r != ENOENT)
		printf("LWIP: DS check failed (%d)\n", r);

	/*
	 * Check whether the drivers we currently know about are still up.  The
	 * ones that are not are really gone.  It is no problem that we recheck
	 * any drivers that have just been reported by ds_check() above.
	 * However, we cannot check the same key: while the driver is being
	 * restarted, its driver status is already gone from DS.  Instead, see
	 * if there is still an entry for its label, as that entry remains in
	 * existence during the restart.  The associated endpoint may still
	 * change however, so do not check that part: in such cases we will get
	 * a driver-up announcement later anyway.
	 */
	for (slot = 0; slot < ndev_max; slot++) {
		if (ndev_array[slot].ndev_endpt == NONE
505*ef8d499eSDavid van Moolenbroek 			continue;
506*ef8d499eSDavid van Moolenbroek 
507*ef8d499eSDavid van Moolenbroek 		if (ds_retrieve_label_endpt(ndev_array[slot].ndev_label,
508*ef8d499eSDavid van Moolenbroek 		    &endpt) != OK)
509*ef8d499eSDavid van Moolenbroek 			ndev_down(&ndev_array[slot]);
510*ef8d499eSDavid van Moolenbroek 	}
511*ef8d499eSDavid van Moolenbroek }

/*
 * A network device driver has sent a reply to our initialization request.
 */
static void
ndev_init_reply(struct ndev * ndev, const message * m_ptr)
{
	struct ndev_hwaddr hwaddr;
	uint8_t hwaddr_len, max_send, max_recv;
	const char *name;
	int enabled;

	/*
	 * Make sure that we were waiting for a reply to an initialization
	 * request, and that this is the reply to that request.
	 */
	if (NDEV_ACTIVE(ndev) ||
	    m_ptr->m_netdriver_ndev_init_reply.id != ndev->ndev_sendq.nq_head)
		return;

	/*
	 * Do just enough sanity checking on the data to pass it up to the
	 * ethif layer, which will check the rest (e.g., name duplicates).
	 */
	if (memchr(m_ptr->m_netdriver_ndev_init_reply.name, '\0',
	    sizeof(m_ptr->m_netdriver_ndev_init_reply.name)) == NULL ||
	    m_ptr->m_netdriver_ndev_init_reply.name[0] == '\0') {
		printf("LWIP: driver %d provided invalid name\n",
		    m_ptr->m_source);

		ndev_down(ndev);

		return;
	}

	hwaddr_len = m_ptr->m_netdriver_ndev_init_reply.hwaddr_len;
	if (hwaddr_len < 1 || hwaddr_len > __arraycount(hwaddr.nhwa_addr)) {
		printf("LWIP: driver %d provided invalid HW-addr length\n",
		    m_ptr->m_source);

		ndev_down(ndev);

		return;
	}

	if ((max_send = m_ptr->m_netdriver_ndev_init_reply.max_send) < 1 ||
	    (max_recv = m_ptr->m_netdriver_ndev_init_reply.max_recv) < 1) {
		printf("LWIP: driver %d provided invalid queue maximum\n",
		    m_ptr->m_source);

		ndev_down(ndev);

		return;
	}

	/*
	 * If the driver is new, allocate a new ethif object for it.  On
	 * success, or if the driver was restarted, (re)enable the interface.
	 * Both calls may fail, in which case we should forget about the
	 * driver.  It may continue to send us messages, which we should then
	 * discard.
	 */
	name = m_ptr->m_netdriver_ndev_init_reply.name;

	if (ndev->ndev_ethif == NULL) {
		ndev->ndev_ethif = ethif_add((ndev_id_t)(ndev - ndev_array),
		    name, m_ptr->m_netdriver_ndev_init_reply.caps);
		name = NULL;
	}

	if (ndev->ndev_ethif != NULL) {
		/*
		 * Set the maximum numbers of pending requests (for each
		 * direction) first, because enabling the interface may cause
		 * the ethif layer to start sending requests immediately.
		 */
		ndev->ndev_sendq.nq_max = max_send;
		ndev->ndev_sendq.nq_head++;

		/*
		 * Limit the maximum number of concurrently pending receive
		 * requests to our configured maximum.  For send requests, we
		 * use a more dynamic approach with spare request objects.
		 */
		if (max_recv > NDEV_RECVQ)
			max_recv = NDEV_RECVQ;
		ndev->ndev_recvq.nq_max = max_recv;
		ndev->ndev_recvq.nq_head++;

		memset(&hwaddr, 0, sizeof(hwaddr));
		memcpy(hwaddr.nhwa_addr,
		    m_ptr->m_netdriver_ndev_init_reply.hwaddr, hwaddr_len);

		/*
		 * Provide a NULL pointer for the name if we have only just
		 * added the interface.  The callee may use this to determine
		 * whether the driver is new or has been restarted.
		 */
		enabled = ethif_enable(ndev->ndev_ethif, name, &hwaddr,
		    m_ptr->m_netdriver_ndev_init_reply.hwaddr_len,
		    m_ptr->m_netdriver_ndev_init_reply.caps,
		    m_ptr->m_netdriver_ndev_init_reply.link,
		    m_ptr->m_netdriver_ndev_init_reply.media);
	} else
		enabled = FALSE;

	/*
	 * If we did not manage to enable the interface, remove it again,
	 * possibly also from the ethif layer.
	 */
	if (!enabled)
		ndev_down(ndev);
	else
		ndev_pending--;
}

/*
 * Request that a network device driver change its configuration.  This
 * function allows for configuration of various driver and device aspects: the
 * I/O mode (and multicast receipt list), the enabled (sub)set of
 * capabilities, the driver-specific flags, and the hardware address.  Each of
 * these settings may be changed by setting the corresponding NDEV_SET_ flag in
 * the 'set' field of the given configuration structure.  It is explicitly
 * allowed to generate a request with no NDEV_SET_ flags; such a request will
 * be sent to the driver and ultimately generate a response.  Return OK if the
 * configuration request was sent to the driver, EBUSY if no (more) requests
 * can be sent to the driver right now, or ENOMEM on grant allocation failure.
 */
int
ndev_conf(ndev_id_t id, const struct ndev_conf * nconf)
{
	struct ndev *ndev;
	struct ndev_req *nreq;
	uint32_t seq;
	message m;
	cp_grant_id_t grant;
	int r;

	assert(id < __arraycount(ndev_array));
	ndev = &ndev_array[id];

	assert(ndev->ndev_endpt != NONE);
	assert(NDEV_ACTIVE(ndev));

	if ((nreq = ndev_queue_get(&ndev->ndev_sendq, NDEV_CONF,
	    &seq)) == NULL)
		return EBUSY;

	memset(&m, 0, sizeof(m));
	m.m_type = NDEV_CONF;
	m.m_ndev_netdriver_conf.id = seq;
	m.m_ndev_netdriver_conf.set = nconf->nconf_set;

	grant = GRANT_INVALID;

	if (nconf->nconf_set & NDEV_SET_MODE) {
		m.m_ndev_netdriver_conf.mode = nconf->nconf_mode;

		if (nconf->nconf_mode & NDEV_MODE_MCAST_LIST) {
			assert(nconf->nconf_mclist != NULL);
			assert(nconf->nconf_mccount != 0);

			grant = cpf_grant_direct(ndev->ndev_endpt,
			    (vir_bytes)nconf->nconf_mclist,
			    sizeof(nconf->nconf_mclist[0]) *
			    nconf->nconf_mccount, CPF_READ);

			if (!GRANT_VALID(grant))
				return ENOMEM;

			m.m_ndev_netdriver_conf.mcast_count =
			    nconf->nconf_mccount;
		}
	}

	m.m_ndev_netdriver_conf.mcast_grant = grant;

	if (nconf->nconf_set & NDEV_SET_CAPS)
		m.m_ndev_netdriver_conf.caps = nconf->nconf_caps;

	if (nconf->nconf_set & NDEV_SET_FLAGS)
		m.m_ndev_netdriver_conf.flags = nconf->nconf_flags;

	if (nconf->nconf_set & NDEV_SET_MEDIA)
		m.m_ndev_netdriver_conf.media = nconf->nconf_media;

	if (nconf->nconf_set & NDEV_SET_HWADDR)
		memcpy(m.m_ndev_netdriver_conf.hwaddr,
		    nconf->nconf_hwaddr.nhwa_addr,
		    __arraycount(m.m_ndev_netdriver_conf.hwaddr));

	if ((r = asynsend3(ndev->ndev_endpt, &m, AMF_NOREPLY)) != OK)
		panic("asynsend to driver failed: %d", r);

	nreq->nreq_grant[0] = grant; /* may also be invalid */
	nreq->nreq_grant[1] = GRANT_INVALID;

	ndev_queue_add(&ndev->ndev_sendq, nreq);

	return OK;
}

/*
 * The network device driver has sent a reply to a configuration request.
 */
static void
ndev_conf_reply(struct ndev * ndev, const message * m_ptr)
{

	/*
	 * Was this the request we were waiting for?  If so, remove it from the
	 * send queue.  Otherwise, ignore this reply message.
	 */
	if (!NDEV_ACTIVE(ndev) || !ndev_queue_remove(&ndev->ndev_sendq,
	    NDEV_CONF, m_ptr->m_netdriver_ndev_reply.id))
		return;

	/* Tell the ethif layer about the updated configuration. */
	assert(ndev->ndev_ethif != NULL);

	ethif_configured(ndev->ndev_ethif,
	    m_ptr->m_netdriver_ndev_reply.result);
}

/*
 * Construct a packet send or receive request and send it off to a network
 * driver.  The given pbuf chain may be part of a queue.  Return OK if the
 * request was successfully sent, or ENOMEM on grant allocation failure.
 */
static int
ndev_transfer(struct ndev * ndev, const struct pbuf * pbuf, int do_send,
	uint32_t seq, struct ndev_req * nreq)
{
	cp_grant_id_t grant;
	message m;
	unsigned int i;
	size_t left;
	int r;

	memset(&m, 0, sizeof(m));
	m.m_type = (do_send) ? NDEV_SEND : NDEV_RECV;
	m.m_ndev_netdriver_transfer.id = seq;

	left = pbuf->tot_len;

	for (i = 0; left > 0; i++) {
		assert(i < NDEV_IOV_MAX);

		grant = cpf_grant_direct(ndev->ndev_endpt,
		    (vir_bytes)pbuf->payload, pbuf->len,
		    (do_send) ? CPF_READ : CPF_WRITE);

		if (!GRANT_VALID(grant)) {
			while (i-- > 0)
				(void)cpf_revoke(nreq->nreq_grant[i]);

			return ENOMEM;
		}

		m.m_ndev_netdriver_transfer.grant[i] = grant;
		m.m_ndev_netdriver_transfer.len[i] = pbuf->len;

		nreq->nreq_grant[i] = grant;

		assert(left >= pbuf->len);
		left -= pbuf->len;
		pbuf = pbuf->next;
	}

	m.m_ndev_netdriver_transfer.count = i;

	/*
	 * Unless the array is full, an invalid grant marks the end of the list
	 * of valid grants.
	 */
	if (i < __arraycount(nreq->nreq_grant))
		nreq->nreq_grant[i] = GRANT_INVALID;

	if ((r = asynsend3(ndev->ndev_endpt, &m, AMF_NOREPLY)) != OK)
		panic("asynsend to driver failed: %d", r);

	return OK;
}

/*
 * Send a packet to the given network driver.  Return OK if the packet is sent
 * off to the driver, EBUSY if no (more) packets can be sent to the driver at
 * this time, or ENOMEM on grant allocation failure.
 *
 * The use of 'pbuf' in this interface is a bit ugly, but it saves us from
 * having to go through an intermediate representation (e.g. an iovec array)
 * for the data being sent.  The same applies to ndev_recv().
 */
int
ndev_send(ndev_id_t id, const struct pbuf * pbuf)
{
	struct ndev *ndev;
	struct ndev_req *nreq;
	uint32_t seq;
	int r;

	assert(id < __arraycount(ndev_array));
	ndev = &ndev_array[id];

	assert(ndev->ndev_endpt != NONE);
	assert(NDEV_ACTIVE(ndev));

	if ((nreq = ndev_queue_get(&ndev->ndev_sendq, NDEV_SEND,
	    &seq)) == NULL)
		return EBUSY;

	if ((r = ndev_transfer(ndev, pbuf, TRUE /*do_send*/, seq, nreq)) != OK)
		return r;

	ndev_queue_add(&ndev->ndev_sendq, nreq);

	return OK;
}

/*
 * The network device driver has sent a reply to a send request.
 */
static void
ndev_send_reply(struct ndev * ndev, const message * m_ptr)
{

	/*
	 * Was this the request we were waiting for?  If so, remove it from the
	 * send queue.  Otherwise, ignore this reply message.
	 */
	if (!NDEV_ACTIVE(ndev) || !ndev_queue_remove(&ndev->ndev_sendq,
	    NDEV_SEND, m_ptr->m_netdriver_ndev_reply.id))
		return;

	/* Tell the ethif layer about the result of the transmission. */
	assert(ndev->ndev_ethif != NULL);

	ethif_sent(ndev->ndev_ethif,
	    m_ptr->m_netdriver_ndev_reply.result);
}

/*
 * Return TRUE if a new receive request can be spawned for a particular network
 * driver, or FALSE if its queue of receive requests is full.  This call exists
 * merely to avoid needless buffer allocation in the case that ndev_recv() is
 * going to return EBUSY anyway.
 */
int
ndev_can_recv(ndev_id_t id)
{
	struct ndev *ndev;

	assert(id < __arraycount(ndev_array));
	ndev = &ndev_array[id];

	assert(ndev->ndev_endpt != NONE);
	assert(NDEV_ACTIVE(ndev));

	return (ndev->ndev_recvq.nq_count < ndev->ndev_recvq.nq_max);
}

/*
 * Start the process of receiving a packet from a network driver.  The packet
 * will be stored in the given pbuf chain upon completion.  Return OK if the
 * receive request is sent to the driver, EBUSY if the maximum number of
 * concurrent receive requests has been reached for this driver, or ENOMEM on
 * grant allocation failure.
 */
int
ndev_recv(ndev_id_t id, struct pbuf * pbuf)
{
	struct ndev *ndev;
	struct ndev_req *nreq;
	uint32_t seq;
	int r;

	assert(id < __arraycount(ndev_array));
	ndev = &ndev_array[id];

	assert(ndev->ndev_endpt != NONE);
	assert(NDEV_ACTIVE(ndev));

	if ((nreq = ndev_queue_get(&ndev->ndev_recvq, NDEV_RECV,
	    &seq)) == NULL)
		return EBUSY;

	if ((r = ndev_transfer(ndev, pbuf, FALSE /*do_send*/, seq,
	    nreq)) != OK)
		return r;

	ndev_queue_add(&ndev->ndev_recvq, nreq);

	return OK;
}

/*
 * The network device driver has sent a reply to a receive request.
 */
static void
ndev_recv_reply(struct ndev * ndev, const message * m_ptr)
{

	/*
	 * Was this the request we were waiting for?  If so, remove it from the
	 * receive queue.  Otherwise, ignore this reply message.
	 */
	if (!NDEV_ACTIVE(ndev) || !ndev_queue_remove(&ndev->ndev_recvq,
	    NDEV_RECV, m_ptr->m_netdriver_ndev_reply.id))
		return;

	/* Tell the ethif layer about the result of the receipt. */
	assert(ndev->ndev_ethif != NULL);

	ethif_received(ndev->ndev_ethif,
	    m_ptr->m_netdriver_ndev_reply.result);
}

/*
 * A network device driver sent a status report to us.  Process it and send a
 * reply.
 */
static void
ndev_status(struct ndev * ndev, const message * m_ptr)
{
	message m;
	int r;

	if (!NDEV_ACTIVE(ndev))
		return;

	/* Tell the ethif layer about the status update. */
	assert(ndev->ndev_ethif != NULL);

	ethif_status(ndev->ndev_ethif, m_ptr->m_netdriver_ndev_status.link,
	    m_ptr->m_netdriver_ndev_status.media,
	    m_ptr->m_netdriver_ndev_status.oerror,
	    m_ptr->m_netdriver_ndev_status.coll,
	    m_ptr->m_netdriver_ndev_status.ierror,
	    m_ptr->m_netdriver_ndev_status.iqdrop);

	/*
	 * Send a reply, so that the driver knows it can send a new status
	 * update without risking asynsend queue overflows.  The ID of these
	 * messages is chosen by the driver, and we simply echo it.
	 */
	memset(&m, 0, sizeof(m));
	m.m_type = NDEV_STATUS_REPLY;
	m.m_ndev_netdriver_status_reply.id = m_ptr->m_netdriver_ndev_status.id;

	if ((r = asynsend(m_ptr->m_source, &m)) != OK)
		panic("asynsend to driver failed: %d", r);
}
964*ef8d499eSDavid van Moolenbroek 
965*ef8d499eSDavid van Moolenbroek /*
966*ef8d499eSDavid van Moolenbroek  * Process a network driver reply message.
967*ef8d499eSDavid van Moolenbroek  */
968*ef8d499eSDavid van Moolenbroek void
ndev_process(const message * m_ptr,int ipc_status)969*ef8d499eSDavid van Moolenbroek ndev_process(const message * m_ptr, int ipc_status)
{
	struct ndev *ndev;
	endpoint_t endpt;
	ndev_id_t slot;

	/* Find the slot of the driver that sent the message, if any. */
	endpt = m_ptr->m_source;

	for (slot = 0, ndev = ndev_array; slot < ndev_max; slot++, ndev++)
		if (ndev->ndev_endpt == endpt)
			break;

	/*
	 * If we cannot find a slot for the driver, drop the message.  We may
	 * be ignoring the driver because it misbehaved or we are out of slots.
	 */
	if (slot == ndev_max)
		return;

	/*
	 * Process the reply message.  For future compatibility, ignore any
	 * unrecognized message types.
	 */
	switch (m_ptr->m_type) {
	case NDEV_INIT_REPLY:
		ndev_init_reply(ndev, m_ptr);

		break;

	case NDEV_CONF_REPLY:
		ndev_conf_reply(ndev, m_ptr);

		break;

	case NDEV_SEND_REPLY:
		ndev_send_reply(ndev, m_ptr);

		break;

	case NDEV_RECV_REPLY:
		ndev_recv_reply(ndev, m_ptr);

		break;

	case NDEV_STATUS:
		ndev_status(ndev, m_ptr);

		break;
	}
}