Lines Matching defs:to
74 * TODO check the PCI spec whether BAR4 and BAR5 really have to be at least one
86 * TODO according to the PCI spec we need one bit per vector, document the
106 * by stopping the NVMf subsystem when the device is instructed to enter the
108 * collecting migration state and providing it to the vfio-user client. We
110 * complicated to do, we might support this in the future.
151 /* Magic value to validate migration data */
153 /* Version to check the data is same from source to destination */
156 /* The library uses this field to know how many fields in this
206 /* NVMe state definition used to load/restore from/to NVMe migration BAR region */
277 /* NVMf subsystem is paused; it's safe to do a PCI reset, memory registration,
311 /* multiple SQs can be mapped to the same CQ */
316 * we will post a NVMe completion to VM, we will not set this flag when
383 * Kicks to the controller by ctrlr_kick().
411 * Number of times we attempted to rearm all the SQs in the
417 * Number of times we had to apply flow control to this SQ.
431 /* Whether this PG needs kicking to wake up again. */
452 * by the get_pending_bytes callback to tell whether or not the
481 * Shadow doorbells PRPs to provide during the stop-and-copy state.
522 /* The subsystem is in PAUSED state and needs to be resumed, TRUE
775 SPDK_ERRLOG("failed to translate IOVA [%#lx, %#lx) (prot=%d) to local VA: %m\n",
779 SPDK_ERRLOG("failed to translate IOVA [%#lx, %#lx) (prot=%d) to local VA: %d segments needed\n",
787 SPDK_ERRLOG("failed to get iovec for IOVA [%#lx, %#lx): %m\n",
819 SPDK_ERRLOG("GPA to VVA failed\n");
906 SPDK_ERRLOG("GPA to VVA failed\n");
935 SPDK_ERRLOG("GPA to VVA failed\n");
961 SPDK_ERRLOG("GPA to VVA failed\n");
965 /* sgl points to the first segment */
992 /* move to next level's segments */
1012 * For each queue, update the location of its doorbell to the correct location:
1117 * Should only be written to by the controller.
1139 * Copy doorbells from one buffer to the other, during switches between BAR0
1144 const volatile uint32_t *from, volatile uint32_t *to)
1148 assert(to != NULL);
1151 "%s: migrating shadow doorbells from %p to %p\n",
1152 ctrlr_id(ctrlr), from, to);
1157 to[queue_index(i, false)] = from[queue_index(i, false)];
1161 to[queue_index(i, true)] = from[queue_index(i, true)];
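The doorbell-copy routine around source lines 1139-1161 walks every possible queue pair and copies both its SQ tail and CQ head entries between buffers. A minimal standalone sketch of that loop; the `queue_index()` layout and the `MAX_QPAIRS` limit here are assumptions for illustration, not SPDK's exact definitions:

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_QPAIRS 32 /* assumed per-controller queue pair limit */

/* With CAP.DSTRD == 0, doorbells are packed as SQ0TDBL, CQ0HDBL,
 * SQ1TDBL, CQ1HDBL, ...: the SQ tail of queue i sits at index 2*i
 * and its CQ head at 2*i + 1. */
static size_t
queue_index(uint16_t qid, int is_cq)
{
    return ((size_t)qid * 2) + (is_cq ? 1 : 0);
}

/* Copy every SQ tail and CQ head doorbell from one buffer to the
 * other, e.g. when switching between BAR0 and shadow doorbells. */
static void
copy_doorbells(const volatile uint32_t *from, volatile uint32_t *to)
{
    for (uint16_t i = 0; i < MAX_QPAIRS; i++) {
        to[queue_index(i, 0)] = from[queue_index(i, 0)]; /* SQ tail */
        to[queue_index(i, 1)] = from[queue_index(i, 1)]; /* CQ head */
    }
}
```

The `volatile` qualifiers mirror the fact that one side of the copy is shared with the guest and may be written concurrently.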
1326 * mappable BAR0 disabled: we need a vfio-user message to wake us up
1327 * when a client writes new doorbell values to BAR0, via the
1334 * If BAR0 is mappable, it doesn't make sense to support shadow
1350 * to send pending IRQs.
1472 * Updates eventidx to set an SQ into interrupt or polling mode.
1475 * this means that the host has submitted more items to the queue while we were
1477 * or otherwise make sure we are going to wake up again.
1514 * it won't write to BAR0 and we'll miss the update.
1523 * the case, then we've lost the race and we need to update the event
1524 * index again (after polling the queue, since the host won't write to
1531 * tail has been updated, so we need to ensure that any changes to the
1532 * queue will be visible to us if the doorbell has been updated.
1534 * The driver should provide similar ordering with a wmb() to ensure
1553 * loop); if we go to sleep while the SQ is not empty, then we won't
1562 * Arrange for an SQ to interrupt us if written. Returns non-zero if we
1590 * We couldn't arrange an eventidx guaranteed to cause a BAR0 write, as
1591 * we raced with the producer too many times; force ourselves to wake up
1607 * We're in interrupt mode, and potentially about to go to sleep. We need to
1608 * make sure any further I/O submissions are guaranteed to wake us up: for
1609 * shadow doorbells that means we may need to go through set_sq_eventidx() for
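The comments around source lines 1472-1609 describe the classic eventidx race: after writing the event index we must re-read the tail, because the host may have submitted more entries (and skipped the BAR0 doorbell write) in the meantime. A hedged, self-contained sketch of that check; the struct and function names are invented for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/* Invented stand-in for the shadow-doorbell state of one SQ. */
struct sq_state {
    volatile uint32_t tail;     /* written by the host driver */
    volatile uint32_t eventidx; /* host writes BAR0 when tail crosses this */
};

/*
 * Arm the SQ: request a BAR0 write once the host moves past the tail we
 * have already consumed.  Returns false if the host raced us (the tail
 * moved after our last poll), in which case the caller must poll the SQ
 * again before sleeping.
 */
static bool
sq_try_arm(struct sq_state *sq, uint32_t consumed_tail)
{
    sq->eventidx = consumed_tail;
    __sync_synchronize(); /* eventidx must be visible before re-reading tail */
    return sq->tail == consumed_tail;
}
```

In the real code this sits in a bounded retry loop; once the retries are exhausted, the controller forces a self-kick so it is still guaranteed to wake up and poll.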
1706 /* Map PRP list from guest physical memory to
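Mapping a PRP list (source line 1706) starts by working out how many PRP entries a transfer needs: the first entry may point into the middle of a memory page, while every further entry covers exactly one page. A small sketch of that arithmetic; the helper name and signature are assumptions:

```c
#include <stdint.h>

/* Hypothetical helper: number of PRP entries needed to describe a
 * buffer of `len` bytes starting at guest physical address `gpa`,
 * for the controller's memory page size. */
static uint32_t
nvme_prp_count(uint64_t gpa, uint32_t len, uint32_t page_size)
{
    /* Bytes available in the (possibly partial) first page. */
    uint32_t first = page_size - (uint32_t)(gpa & (page_size - 1));

    if (len <= first) {
        return 1;
    }
    /* One entry for the first page, then one per full/partial page. */
    return 1 + (len - first + page_size - 1) / page_size;
}
```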
1737 * value, so we only have to read it for real if it appears that we are full.
1786 * As per NVMe Base spec 3.3.1.2.1, we are supposed to implement CQ flow
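The lines around 1737-1786 describe CQ flow control with a cached head: reading the real head doorbell (shared with the guest) on every completion is wasteful, so the cached value is refreshed only when the queue looks full. A standalone sketch; the struct layout and names are invented:

```c
#include <stdbool.h>
#include <stdint.h>

/* Invented CQ bookkeeping, not SPDK's actual struct. */
struct cq_state {
    uint32_t tail;                 /* next slot we will post to */
    uint32_t cached_head;          /* last head value we read */
    uint32_t size;                 /* queue depth */
    const volatile uint32_t *hdbl; /* the real head doorbell register */
};

/* Reading *cq->hdbl is the expensive part, so consult it only when the
 * cached head suggests the queue is full. */
static bool
cq_is_full(struct cq_state *cq)
{
    uint32_t next = (cq->tail + 1) % cq->size; /* slot after our tail */

    if (next != cq->cached_head) {
        return false;              /* cached value already proves room */
    }
    cq->cached_head = *cq->hdbl;   /* refresh from the doorbell for real */
    return next == cq->cached_head;
}
```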
1833 SPDK_ERRLOG("%s: failed to trigger interrupt: %m\n",
1893 SPDK_DEBUGLOG(nvmf_vfio, "%s: try to delete cqid:%u=%p\n", ctrlr_id(vu_ctrlr),
2087 SPDK_ERRLOG("%s: failed to map I/O queue: %m\n", ctrlr_id(ctrlr));
2098 SPDK_ERRLOG("%s: failed to allocate SQ requests: %m\n", ctrlr_id(ctrlr));
2113 * The Specification prohibits the controller from writing to the shadow
2193 SPDK_ERRLOG("%s: failed to map I/O queue: %m\n", ctrlr_id(ctrlr));
2212 * The Specification prohibits the controller from writing to the shadow
2347 * Deletion of the CQ is only deferred to delete_sq_done() on
2348 * VM reboot or CC.EN change, so we have to delete it in all
2425 /* Map guest physical addresses to our virtual address space. */
2428 SPDK_ERRLOG("%s: failed to map shadow doorbell buffers\n",
2447 * Set all possible CQ head doorbells to polling mode now, such that we
2448 * don't have to worry about it later if the host creates more queues.
2450 * We only ever want interrupts for writes to the SQ tail doorbells
2462 * BAR0 doorbells and make I/O queue doorbells point to the new buffer.
2464 * This needs to account for older versions of the Linux NVMe driver,
2583 * needs to be re-armed before we go to sleep.
2600 * As we need to make sure we have free CQ slots (see
2625 * interrupt mode, we need to kick ourselves, so that we
2626 * are guaranteed to wake up and come back here.
2711 /* VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE are enabled when registering to VFIO, here we also
2713 * there is no need to register the same memory again.
2731 /* For shared CQ case, we will use q_addr() to avoid mapping CQ multiple times */
2735 SPDK_DEBUGLOG(nvmf_vfio, "Memory isn't ready to remap cqid:%d %#lx-%#lx\n",
2745 SPDK_DEBUGLOG(nvmf_vfio, "Memory isn't ready to remap sqid:%d %#lx-%#lx\n",
2831 /* Used to initiate a controller-level reset or a controller shutdown. */
2865 /* Used to re-enable the controller after a controller-level reset. */
2912 SPDK_ERRLOG("%s: failed to enable ctrlr\n", ctrlr_id(vu_ctrlr));
2958 * DSTRD is set to the fixed value 0 for NVMf.
2971 SPDK_WARNLOG("%s: host tried to read BAR0 doorbell %#lx\n",
2993 /* convert byte offset to array index */
3012 SPDK_DEBUGLOG(vfio_user_db, "%s: updating BAR0 doorbell %s:%ld to %u\n",
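A BAR0 doorbell write (lines around 2971-3012) arrives as a byte offset into the register file; with CAP.DSTRD == 0 it decodes into an array index, a queue ID, and an SQ-tail/CQ-head flag. A sketch of that decode; the helper is hypothetical, though the 0x1000 doorbell base comes from the NVMe spec:

```c
#include <stdbool.h>
#include <stdint.h>

#define NVME_DOORBELLS_OFFSET 0x1000 /* doorbell registers start here in BAR0 */

/* Decode a BAR0 byte offset into a doorbell-array index, a queue ID and
 * an SQ-tail/CQ-head flag, assuming a 4-byte doorbell stride (DSTRD=0). */
static void
decode_doorbell(uint64_t offset, uint32_t *idx, uint16_t *qid, bool *is_cq)
{
    *idx = (uint32_t)((offset - NVME_DOORBELLS_OFFSET) / sizeof(uint32_t));
    *qid = (uint16_t)(*idx / 2); /* SQ tail and CQ head alternate per queue */
    *is_cq = (*idx & 1);
}
```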
3088 * that the client (VFIO in QEMU) is obliged to memory map them,
3089 * it might still elect to access them via regular read/write;
3172 /* vendor specific, let's set them to zero for now */
3223 * need to check whether a quiesce was requested.
3268 * so we need to re-check `vu_ctrlr->state`.
3275 SPDK_DEBUGLOG(nvmf_vfio, "%s starting to resume\n", ctrlr_id(vu_ctrlr));
3281 SPDK_ERRLOG("%s: failed to resume, ret=%d\n", endpoint_id(endpoint), ret);
3310 * to ensure that this PG is quiesced. This only works because there's no
3313 * Once we've walked all PGs, we need to pause any submitted I/O via
3343 SPDK_ERRLOG("%s: failed to pause, ret=%d\n",
3360 SPDK_ERRLOG("Failed to allocate subsystem pause context\n");
3391 SPDK_DEBUGLOG(nvmf_vfio, "%s starts to quiesce\n", ctrlr_id(vu_ctrlr));
3413 SPDK_DEBUGLOG(nvmf_vfio, "%s is busy quiescing, current state %u\n", ctrlr_id(vu_ctrlr),
3491 /* Read region 9 content and restore it to migration data structures */
3547 /* Save all data to vfio_user_nvme_migr_state first, then we will
3548 * copy it to device migration region at last.
3590 /* Save all data to device migration region */
3638 * If we are about to close the connection, we need to unregister the interrupt,
3815 SPDK_ERRLOG("%s: failed to re-map shadow doorbell buffers\n",
3901 * Since we won't be calling vfu_sgl_put() for them, we need to explicitly
3974 * group will do nothing to the ADMIN queue pair for now.
4019 SPDK_ERRLOG("%s: failed to resume, ret=%d\n", endpoint_id(endpoint), ret);
4061 * When transitioning to pre-copy state we set pending_bytes to 0,
4062 * so the vfio-user client shouldn't attempt to read any migration
4148 .mtab = {.tbir = NVMF_VFIO_USER_MSIX_TABLE_BIR, .to = 0x0},
4171 SPDK_ERRLOG("vfu_ctx %p failed to initialize PCI\n", vfu_ctx);
4203 SPDK_ERRLOG("vfu_ctx %p failed to setup cfg\n", vfu_ctx);
4218 SPDK_ERRLOG("vfu_ctx %p failed to setup bar 0\n", vfu_ctx);
4225 SPDK_ERRLOG("vfu_ctx %p failed to setup bar 4\n", vfu_ctx);
4232 SPDK_ERRLOG("vfu_ctx %p failed to setup bar 5\n", vfu_ctx);
4238 SPDK_ERRLOG("vfu_ctx %p failed to setup dma callback\n", vfu_ctx);
4244 SPDK_ERRLOG("vfu_ctx %p failed to setup reset callback\n", vfu_ctx);
4250 SPDK_ERRLOG("vfu_ctx %p failed to setup INTX\n", vfu_ctx);
4256 SPDK_ERRLOG("vfu_ctx %p failed to setup MSIX\n", vfu_ctx);
4269 SPDK_ERRLOG("vfu_ctx %p failed to setup migration region\n", vfu_ctx);
4276 SPDK_ERRLOG("vfu_ctx %p failed to setup migration callbacks\n", vfu_ctx);
4282 SPDK_ERRLOG("vfu_ctx %p failed to realize\n", vfu_ctx);
4302 * We need to do this on first listening, and also after destroying a
4442 SPDK_ERRLOG("%s: failed to create vfio-user controller: %s\n",
4485 SPDK_ERRLOG("%s: failed to get socket path: %s.\n", endpoint_id(endpoint), spdk_strerror(errno));
4492 SPDK_ERRLOG("%s: failed to open device memory at %s: %s.\n",
4502 SPDK_ERRLOG("%s: failed to ftruncate file %s: %s.\n", endpoint_id(endpoint), path,
4510 SPDK_ERRLOG("%s: failed to mmap file %s: %s.\n", endpoint_id(endpoint), path, spdk_strerror(errno));
4518 SPDK_ERRLOG("%s: failed to get migration file path: %s.\n", endpoint_id(endpoint),
4525 SPDK_ERRLOG("%s: failed to open device memory at %s: %s.\n",
4535 SPDK_ERRLOG("%s: failed to ftruncate migration file %s: %s.\n", endpoint_id(endpoint), path,
4543 SPDK_ERRLOG("%s: failed to mmap file %s: %s.\n", endpoint_id(endpoint), path, spdk_strerror(errno));
4551 SPDK_ERRLOG("%s: failed to get ctrlr file path: %s\n", endpoint_id(endpoint), spdk_strerror(errno));
4614 /* Defer freeing endpoint resources until the controller
4682 /* Drop const - we will later need to pause/unpause. */
4692 * For this endpoint (which at the libvfio-user level corresponds to a socket),
4693 * if we don't currently have a controller set up, peek to see if the socket is
4694 * able to accept a new connection.
4724 * connection, there is nothing for us to poll for, and
4818 * group, so I/O completions don't have to use
4931 /* add another round of thread polling to avoid taking the endpoint lock recursively */
4951 * to the portion of the BAR that is not mmap'd */
5034 * the SQs assigned to our own poll group. Other poll groups are handled via
5052 * Poll vfio-user for this controller. We need to do this before polling
5058 * `sqs[0]` could be set to NULL in vfio_user_poll_vfu_ctx() context,
5067 * We may have just written to a doorbell owned by another
5068 * reactor: we need to prod them to make sure its SQs are polled
5092 SPDK_DEBUGLOG(nvmf_vfio, "%s: setting interrupt mode to %d\n",
5096 * interrupt_mode needs to persist across controller resets, so store
5105 * In response to the nvmf_vfio_user_create_ctrlr() path, the admin queue is now
5183 /* For I/O queues this command was generated in response to an
5216 * might write to a doorbell. This doorbell write will
5217 * go unnoticed, so let's poll the whole controller to
5242 * Add the given qpair to the given poll group. New qpairs are added via
5262 SPDK_DEBUGLOG(nvmf_vfio, "%s: add QP%d=%p(%p) to poll_group=%p\n",
5305 * would fail. The state changes to ACTIVE immediately after the
5407 * now so that the endpoint can be ready to accept another
5583 SPDK_ERRLOG("%s: failed to map IO OPC %u\n", ctrlr_id(ctrlr), cmd->opc);
5655 * If we suppressed an IRQ in post_completion(), check if it needs to be fired
5656 * here: if the host isn't up to date, and is apparently not actively processing
5684 SPDK_ERRLOG("%s: failed to trigger interrupt: %m\n",
5723 * Refer to "https://developer.arm.com/documentation/102376/0100/
5725 * cannot fix this. Using "dc civac" to invalidate the cache may solve
5758 * Ensure that changes to the queue are visible to us.
5774 * a separate poller (->vfu_ctx_poller), so this poller only needs to poll the