                    Device-mapper Locking architecture

Overview

There are two kinds of users of the device-mapper driver:
 a) users who use the disk drives
 b) users who use the ioctl management interface

Management is done by the dm_dev_*_ioctl and dm_table_*_ioctl routines. There
are two major structures used in these routines and in the device-mapper
driver.

Table entry:

typedef struct dm_table_entry {
        struct dm_dev *dm_dev;          /* backlink */
        uint64_t start;
        uint64_t length;

        struct dm_target *target;       /* Link to table target. */
        void *target_config;            /* Target specific data. */
        SLIST_ENTRY(dm_table_entry) next;
} dm_table_entry_t;

This structure describes one target segment of a dm device. Every device can
have more than one target mapping entry; they are stored in a list. Together
the entries describe the mapping between logical blocks of the dm device and
blocks of the underlying devices.

start    length   target  block device  offset
0        102400   linear  /dev/wd1a     384
102400   204800   linear  /dev/wd2a     384
204800   409600   linear  /dev/wd3a     384

Every device has at least two tables, ACTIVE and INACTIVE. Only the ACTIVE
table is used during IO. Every IO operation on a dm device has to walk the
dm_table_entries list.
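For illustration, a minimal sketch of how such a walk could map a logical
sector of the dm device onto a backing device, using the linear mapping shown
in the table above. It assumes struct dm_table is the SLIST head of the
dm_table_entry list; dm_linear_config and dm_table_remap_sketch are
hypothetical names made up for this example. The real work is done by
dmstrategy() and the per-target strategy routines, under the locking described
later in this document.

/*
 * Sketch only: find the dm_table_entry covering logical sector 'lsector'
 * and translate it to a sector on the backing device of a linear target.
 * dm_linear_config is an assumed layout of the linear target_config.
 */
struct dm_linear_config {
        dev_t backing_dev;              /* e.g. /dev/wd2a */
        uint64_t offset;                /* start offset on the backing device */
};

static int
dm_table_remap_sketch(struct dm_table *tbl, uint64_t lsector,
    dev_t *devp, uint64_t *psectorp)
{
        dm_table_entry_t *table_en;
        struct dm_linear_config *dlc;

        SLIST_FOREACH(table_en, tbl, next) {
                if (lsector < table_en->start ||
                    lsector >= table_en->start + table_en->length)
                        continue;

                dlc = table_en->target_config;
                *devp = dlc->backing_dev;
                /* From the example table: sector 102400 -> /dev/wd2a, 384. */
                *psectorp = dlc->offset + (lsector - table_en->start);
                return 0;
        }

        return ENOENT;                  /* no entry covers this sector */
}

Only the ACTIVE table may be walked this way, and only while the reader holds
the table reference counter described in the following sections.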
Device entry:

typedef struct dm_dev {
        char name[DM_NAME_LEN];
        char uuid[DM_UUID_LEN];

        int minor;
        uint32_t flags;                 /* store communication protocol flags */

        kmutex_t dev_mtx;               /* mutex for general device lock */
        kcondvar_t dev_cv;              /* cv for ioctl synchronisation */

        uint32_t event_nr;
        uint32_t ref_cnt;

        uint32_t dev_type;

        dm_table_head_t table_head;

        struct dm_dev_head upcalls;

        struct disklabel *dk_label;     /* Disklabel for this table. */

        TAILQ_ENTRY(dm_dev) next_upcall; /* LIST of mirrored, snapshotted devices. */

        TAILQ_ENTRY(dm_dev) next_devlist; /* Major device list. */
} dm_dev_t;

Every device created in the device-mapper is represented by this structure.
All devices are stored in a list. Every ioctl routine has to work with this
structure.

        Locking in dm driver

Locking is needed in two places: synchronisation between ioctl routines, and
synchronisation between IO operations and ioctls. Table entries are read
during IO and during some ioctl routines. There are only a few routines which
manipulate the table lists.

Read access to table list:

dmsize
dmstrategy
dm_dev_status_ioctl
dm_table_info_ioctl
dm_table_deps_ioctl
dm_disk_ioctl -> DIOCCACHESYNC ioctl

Write access to table list:

dm_dev_remove_ioctl  -> Remove the device from the device list; this routine
                        has to remove all tables.
dm_dev_resume_ioctl  -> Switch tables on a suspended device, swapping the
                        INACTIVE and ACTIVE tables.
dm_table_clear_ioctl -> Remove the INACTIVE table from the table list.


        Synchronisation between readers and writers in table list

I moved everything needed for table synchronisation to struct dm_table_head.

typedef struct dm_table_head {
        /* Current active table is selected with this. */
        int cur_active_table;
        struct dm_table tables[2];

        kmutex_t table_mtx;
        kcondvar_t table_cv;            /* IO waiting cv */

        uint32_t io_cnt;
} dm_table_head_t;

dm_table_head_t is the entry point for every dm_table synchronisation routine.

Because every table user has to go through the table list head, I have
implemented these routines to manage access to the table lists.

/*
 * Destroy all table data. This function can only run when there are no
 * readers on the table lists.
 */
int dm_table_destroy(dm_table_head_t *, uint8_t);

/*
 * Return the length of the active table of a device.
 */
uint64_t dm_table_size(dm_table_head_t *);

/*
 * Return the current active table to the caller and increment the io_cnt
 * reference counter.
 */
struct dm_table *dm_table_get_entry(dm_table_head_t *, uint8_t);

/*
 * Return the number of table entries if the table has at least one entry,
 * and 0 if it has none. The target count returned from this function may
 * already be stale by the time the userspace caller receives it (a
 * dm_dev_resume_ioctl can happen after the return), therefore it is only
 * informative.
 */
int dm_table_get_target_count(dm_table_head_t *, uint8_t);

/*
 * Decrement the io reference counter and wake up all waiters on the
 * table_head cv.
 */
void dm_table_release(dm_table_head_t *, uint8_t s);

/*
 * Switch the inactive and active tables. Has to wait until io_cnt is 0.
 */
void dm_table_switch_tables(dm_table_head_t *);

/*
 * Initialize the table_head structures; I am trying to keep this structure
 * as opaque as possible.
 */
void dm_table_head_init(dm_table_head_t *);

/*
 * Destroy all members of table_head.
 */
void dm_table_head_destroy(dm_table_head_t *);

Internal table synchronisation protocol

Readers:
dm_table_size
dm_table_get_target_count

Readers which hold the reference counter:
dm_table_get_entry
dm_table_release

Writers:
dm_table_destroy
dm_table_switch_tables

These routines manage synchronisation on the table lists. Every reader uses
the dm_table_busy routine to hold the reference counter during its work and
dm_table_unbusy to release it. Every writer has to wait until the reference
counter is 0; only then can it work with the device. It sleeps on
head->table_cv while there are readers. dm_table_get_entry is special in that
it returns the table with the reference counter held; after dm_table_get_entry
every caller must call dm_table_release when it no longer works with the
table.

/*
 * Increment the table user reference counter and return the id of the table
 * selected by table_id.
 * DM_TABLE_ACTIVE will return the active table id.
 * DM_TABLE_INACTIVE will return the inactive table id.
 */
static int
dm_table_busy(dm_table_head_t *head, uint8_t table_id)
{
        uint8_t id;

        id = 0;

        mutex_enter(&head->table_mtx);

        if (table_id == DM_TABLE_ACTIVE)
                id = head->cur_active_table;
        else
                id = 1 - head->cur_active_table;

        head->io_cnt++;

        mutex_exit(&head->table_mtx);
        return id;
}

/*
 * Release the table reference and, if this was the last reader, wake up all
 * waiters.
 */
static void
dm_table_unbusy(dm_table_head_t *head)
{
        KASSERT(head->io_cnt != 0);

        mutex_enter(&head->table_mtx);

        if (--head->io_cnt == 0)
                cv_broadcast(&head->table_cv);

        mutex_exit(&head->table_mtx);
}
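The reader side of the protocol is shown above; the writer side is only
described in prose. A minimal sketch, assuming only the dm_table_head fields
shown earlier, of how dm_table_switch_tables can wait for the readers and
then flip cur_active_table (the real implementation may differ in detail):

void
dm_table_switch_tables(dm_table_head_t *head)
{
        mutex_enter(&head->table_mtx);

        /* Sleep until the last reader calls dm_table_unbusy and broadcasts. */
        while (head->io_cnt != 0)
                cv_wait(&head->table_cv, &head->table_mtx);

        /* Flip between tables[0] and tables[1]: INACTIVE becomes ACTIVE. */
        head->cur_active_table = 1 - head->cur_active_table;

        mutex_exit(&head->table_mtx);
}

dm_table_destroy has to do the same wait before it frees the table entries,
because it may only run when there are no readers on the table lists.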
Device-mapper synchronisation between ioctl routines

Every ioctl user has to find the dm_dev by name, uuid, or minor number.
dm_dev_lookup is used for this. This routine returns the device with the
reference counter held.

void
dm_dev_busy(dm_dev_t *dmv)
{
        mutex_enter(&dmv->dev_mtx);
        dmv->ref_cnt++;
        mutex_exit(&dmv->dev_mtx);
}

void
dm_dev_unbusy(dm_dev_t *dmv)
{
        KASSERT(dmv->ref_cnt != 0);

        mutex_enter(&dmv->dev_mtx);
        if (--dmv->ref_cnt == 0)
                cv_broadcast(&dmv->dev_cv);
        mutex_exit(&dmv->dev_mtx);
}

Before returning from an ioctl, the routine must release the reference
counter with dm_dev_unbusy.

The dm_dev_remove_ioctl routine has to remove the dm_dev from the global
device list and wait until all ioctl users of the dm_dev are gone.
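A minimal sketch of that wait, reusing the ref_cnt/dev_cv protocol from
dm_dev_busy and dm_dev_unbusy above; the helper name dm_dev_wait_unused_sketch
is made up for this example and the real dm_dev_remove_ioctl may differ in
detail:

/*
 * Sketch only: after the device has been unlinked from the global device
 * list, wait until every ioctl user has dropped its reference with
 * dm_dev_unbusy.
 */
static void
dm_dev_wait_unused_sketch(dm_dev_t *dmv)
{
        mutex_enter(&dmv->dev_mtx);

        /* dm_dev_unbusy broadcasts dev_cv when ref_cnt drops to 0. */
        while (dmv->ref_cnt != 0)
                cv_wait(&dmv->dev_cv, &dmv->dev_mtx);

        mutex_exit(&dmv->dev_mtx);

        /* Now it is safe to destroy the tables and free the device. */
}

Because the device has already been removed from the global device list, no
new ioctl user can look it up and take a fresh reference while we wait.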