Lines Matching defs:cluster

1299 	/* Write on clusters 0, 2, 4 and 5 of the blob */
1591 * Manually adjust the offset of the blob's second cluster. This allows
1593 * that cross cluster boundaries. Start by asserting that the allocated
1594 * clusters are where we expect before modifying the second cluster.
1608 * Choose a page offset just before the cluster boundary. The first 6 pages of payload
1609 * will get written to the first cluster, the last 4 to the second cluster.
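The split arithmetic the two comment lines above describe can be checked in isolation. Below is a minimal sketch, not code from the test itself; the helper name pages_in_first_cluster and the 256-pages-per-cluster geometry are illustrative assumptions. It shows how a 10-page payload starting 6 pages before a cluster boundary splits 6/4 across two clusters:

#include <assert.h>
#include <stdint.h>

/* Illustrative helper: how many pages of a payload fall into the cluster
 * that contains the starting page offset. */
static uint64_t
pages_in_first_cluster(uint64_t page_offset, uint64_t payload_pages, uint64_t pages_per_cluster)
{
	uint64_t boundary = (page_offset / pages_per_cluster + 1) * pages_per_cluster;
	uint64_t in_first = boundary - page_offset;

	return in_first < payload_pages ? in_first : payload_pages;
}

int
main(void)
{
	/* Offset 6 pages below the boundary with a 10-page payload:
	 * 6 pages land in the first cluster, the remaining 4 in the second. */
	assert(pages_in_first_cluster(250, 10, 256) == 6);
	return 0;
}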
1629 /* Check that cluster 2 on "disk" was not modified. */
1674 * Choose a page offset just before the cluster boundary. The first 6 pages of payload
1675 * will get written to the first cluster, the last 4 to the second cluster.
2112 * Set first byte of every cluster to 0xFF.
2690 /* Compare cluster size and count to the values after initialization */
3139 /* Load blobstore and check the cluster counts again. */
3171 /* Size slightly increased, but not enough to increase cluster count */
3174 /* Size doubled, increasing the cluster count */
3220 * which fits 1 bit per cluster minus the md header.
3253 /* Load blobstore and check the cluster counts again. */
3391 * Create a blobstore with a cluster size different from the default, and ensure it is
3402 /* Set cluster size to zero */
3414 * Set cluster size to blobstore page size,
3428 * Set cluster size lower than the page size,
3441 /* Set cluster size to twice the default */
3467 * Create a blobstore, reload it and ensure total usable cluster count
3485 /* Create and resize blobs to make sure that usable cluster count won't change */
3511 * so that one cluster is not enough to fit the metadata for those blobs.
3512 * To make this condition occur more quickly, we reduce the cluster
3662 /* Resize the blobs, alternating 1 cluster at a time.
4468 * This simulates the behaviour when a cluster is allocated after blob creation.
4559 /* Perform a write on thread 1. That will allocate a cluster on thread 0 via send_msg */
4566 /* Perform a write on thread 0. That will try to allocate a cluster,
4567 * but fail because the other thread issued the cluster allocation first. */
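The two comments above (source lines 4559-4567) describe a race in which both threads want the same cluster and only the first allocation wins. A minimal sketch of that first-wins rule follows, with entirely hypothetical names and no SPDK APIs, since in the real blobstore the allocation is serialized on a single metadata thread:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical cluster map: 0 means "not allocated yet". */
static uint64_t cluster_map[4];

/* Claim a cluster for a write; fails if an earlier request already claimed it. */
static bool
claim_cluster(uint64_t cluster_idx, uint64_t backing_lba)
{
	if (cluster_map[cluster_idx] != 0) {
		return false; /* lost the race, must reuse the existing allocation */
	}
	cluster_map[cluster_idx] = backing_lba;
	return true;
}

int
main(void)
{
	printf("first claim: %d\n", claim_cluster(1, 0x100));  /* 1: allocation succeeds */
	printf("second claim: %d\n", claim_cluster(1, 0x200)); /* 0: cluster already allocated */
	return 0;
}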
4623 /* Use a very small cluster size for this test. This ensures we need multiple
4710 /* Now write to another unallocated cluster that is part of the same extent page. */
4727 /* Send unmap aligned to the whole cluster - should free it up */
4735 /* Write back to the freed cluster */
4777 /* Use a very large cluster size for this test. Check how the unmap/release cluster code path behaves when
4828 /* Unmap one whole cluster */
4835 /* Verify the data read from the cluster is zeroed out */
4842 /* Fill the same cluster with data */
4850 /* Verify the data read from the cluster has the expected data */
4856 /* Send an unaligned unmap that encompasses one whole cluster */
4864 /* Verify the data read from the cluster is zeroed out */
4873 * check that writes don't claim the currently unmapped cluster */
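The unmap comments above only release clusters that the range covers completely; an unaligned unmap can still free a cluster as long as every page of that cluster falls inside the range. A small sketch of that inward rounding, with illustrative names rather than blobstore code:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Round the unmap range inward to whole clusters; only those may be released. */
static void
fully_covered_clusters(uint64_t start_page, uint64_t num_pages, uint64_t pages_per_cluster,
		       uint64_t *first_cluster, uint64_t *num_clusters)
{
	uint64_t first = (start_page + pages_per_cluster - 1) / pages_per_cluster;
	uint64_t end = (start_page + num_pages) / pages_per_cluster;

	*first_cluster = first;
	*num_clusters = end > first ? end - first : 0;
}

int
main(void)
{
	uint64_t first, count;

	/* Pages 3..9 with 4 pages per cluster: only cluster 1 (pages 4..7) is whole. */
	fully_covered_clusters(3, 7, 4, &first, &count);
	printf("release %" PRIu64 " cluster(s) starting at cluster %" PRIu64 "\n", count, first);
	return 0;
}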
4915 /* Issue #3358 had a bug with concurrent trims to the same cluster causing an assert; check for regressions.
4916 * Send three concurrent unmaps to the same cluster.
4966 /* Write data to the blob; it will allocate a new cluster */
4974 /* Unmap one whole cluster, but do not release this cluster */
4981 /* Verify the data read from the cluster is zeroed out */
5039 /* Specifically target the second cluster in a blob as the first allocation */
5052 /* Issue a write to the second cluster in the blob */
5091 /* Read second cluster after blob reload to confirm data written */
5361 /* For a clone we need to allocate and copy one cluster, update one page of metadata
5521 * NOTE: needs to allocate 1 cluster, 3 clusters unallocated, dependency
5586 /* Fill whole blob with a pattern, except last cluster (to be sure it
5612 /* Write every second cluster with a pattern.
5614 * Last cluster shouldn't be written, to be sure that neither snapshot nor clone
5664 /* Write one cluster on the top level blob. This cluster (1) covers an
5665 * already allocated cluster in snapshot2, so it shouldn't be inflated
5741 /* Only one cluster from a parent should be inflated (the second one
5742 * is covered by a cluster written on the top level blob, and
6926 /* If thin provisioning is set, the cluster should be allocated now */
6931 * Each page is separated by '|'. A whole block [...] symbolizes one cluster (containing 4 pages). */
7003 /* Verify write to second cluster */
7089 /* Read four io_units from second cluster
7102 /* Read second cluster
7238 /* If thin provisioning is set, the cluster should be allocated now */
7243 * Each page is separated by '|'. A whole block [...] symbolizes one cluster (containing 4 pages). */
7329 /* Verify write to second cluster */
7461 /* Read four io_units from second cluster
7478 /* Read second cluster
8006 uint64_t cluster;
8057 for (cluster = 0; cluster < snapshot2->active.num_clusters; ++cluster) {
8058 CU_ASSERT_NOT_EQUAL(snapshot2->active.clusters[cluster], 0);
8059 CU_ASSERT_NOT_EQUAL(snapshot2->active.clusters[cluster],
8060 snapshot1->active.clusters[cluster]);
8112 /* Write at the beginning of first cluster */
8124 /* Write in the middle of third cluster */
8136 /* Write at the end of last cluster */
9568 /* Write on clusters 2 and 4 of the blob */
9589 /* Write on clusters 1 and 3 of the blob */
9664 /* Only clusters 1 and 3 must be filled */