original development tree for Linux kernel GTP module; now long in mainline.
Btrfs: handle errors from btrfs_map_bio() everywhere

With the addition of the device replace procedure, it is possible for
btrfs_map_bio(READ) to report an error. This happens when the specific
mirror is requested which is located on the target disk, and the copy
operation has not yet copied this block. Hence the block cannot be read
and this error state is indicated by returning EIO.

Some background information follows now. A new mirror is added while
the device replace procedure is running. btrfs_get_num_copies() returns
one more, and btrfs_map_bio(GET_READ_MIRROR) adds one more mirror if a
disk location is involved that was already handled by the device
replace copy operation. The assigned mirror num is the highest mirror
number, e.g. the value 3 in case of RAID1.

If btrfs_map_bio() is invoked with mirror_num == 0 (i.e., select any
mirror), the copy on the target drive is never selected because that
disk shall be able to perform the write requests as quickly as
possible. The parallel execution of read requests would only slow down
the disk copy procedure.

Second case is that btrfs_map_bio() is called with mirror_num > 0. This
is done from the repair code only. In this case, the highest mirror num
is assigned to the target disk, since it is used last. And when this
mirror is not available because the copy procedure has not yet handled
this area, an error is returned. Everywhere in the code the handling of
such errors is added now.

Signed-off-by: Stefan Behrens <sbehrens@giantdisaster.de>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
9 years ago
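The mirror-selection rule described in the commit message can be sketched in user space. This is a minimal model, not the kernel API: the function names, the fixed mirror count, and the `copied_end` watermark are illustrative assumptions. It shows why mirror_num == 0 never touches the replace target, while explicitly requesting the highest mirror fails with EIO until the copy operation has covered that block.

```c
#include <assert.h>
#include <errno.h>

/* Illustrative model: 2 real RAID1 mirrors plus the replace target as
 * the extra, highest-numbered mirror (mirror 3). */
#define NUM_REAL_MIRRORS 2
#define TARGET_MIRROR    (NUM_REAL_MIRRORS + 1)

/* Has the replace copy operation already copied this byte offset?
 * (hypothetical watermark; the real code tracks copied extents) */
static int target_copied(unsigned long long bytenr,
			 unsigned long long copied_end)
{
	return bytenr < copied_end;
}

/*
 * Toy stand-in for btrfs_map_bio(READ): returns 0 and stores the chosen
 * mirror, or -EIO when the requested mirror is the not-yet-copied target.
 */
static int map_read(int mirror_num, unsigned long long bytenr,
		    unsigned long long copied_end, int *chosen)
{
	if (mirror_num == 0) {
		/* "any mirror": never pick the target, so it stays free
		 * to absorb the copy writes as quickly as possible */
		*chosen = 1;
		return 0;
	}
	if (mirror_num == TARGET_MIRROR && !target_copied(bytenr, copied_end))
		return -EIO;	/* block is not on the target disk yet */
	*chosen = mirror_num;
	return 0;
}
```

This is why every caller of btrfs_map_bio() now has to check the return value: a mirror_num > 0 read from the repair code can legitimately fail during a device replace.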
/*
 * Copyright (C) STRATO AG 2011. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License v2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public
 * License along with this program; if not, write to the
 * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
 * Boston, MA 021110-1307, USA.
 */
/*
 * This module can be used to catch cases when the btrfs kernel
 * code executes write requests to the disk that bring the file
 * system in an inconsistent state. In such a state, a power-loss
 * or kernel panic event would cause the data on disk to be
 * lost or at least damaged.
 *
 * Code is added that examines all block write requests during
 * runtime (including writes of the super block). Three rules
 * are verified and an error is printed on violation of the
 * rules:
 * 1. It is not allowed to write a disk block which is
 *    currently referenced by the super block (either directly
 *    or indirectly).
 * 2. When a super block is written, it is verified that all
 *    referenced (directly or indirectly) blocks fulfill the
 *    following requirements:
 * 2a. All referenced blocks have either been present when
 *     the file system was mounted (i.e., they have been
 *     referenced by the super block) or they have been
 *     written since then, the write completion callback
 *     was called, no write error was indicated, and a
 *     FLUSH request to the device where these blocks are
 *     located was received and completed.
 * 2b. All referenced blocks need to have a generation
 *     number which is equal to the parent's number.
 *
 * One issue that was found using this module was that the log
 * tree on disk became temporarily corrupted because disk blocks
 * that had been in use for the log tree had been freed and
 * reused too early, while still being referenced by the written
 * super block.
 *
 * The search term in the kernel log that can be used to filter
 * on the existence of detected integrity issues is
 * "btrfs: attempt".
 *
 * The integrity check is enabled via mount options. These
 * mount options are only supported if the integrity check
 * tool is compiled by defining BTRFS_FS_CHECK_INTEGRITY.
 *
 * Example #1, apply integrity checks to all metadata:
 * mount /dev/sdb1 /mnt -o check_int
 *
 * Example #2, apply integrity checks to all metadata and
 * to data extents:
 * mount /dev/sdb1 /mnt -o check_int_data
 *
 * Example #3, apply integrity checks to all metadata and dump
 * the tree that the super block references to kernel messages
 * each time after a super block was written:
 * mount /dev/sdb1 /mnt -o check_int,check_int_print_mask=263
 *
 * If the integrity check tool is included and activated in
 * the mount options, plenty of kernel memory is used, and
 * plenty of additional CPU cycles are spent. Enabling this
 * functionality is not intended for normal use. In most
 * cases, unless you are a btrfs developer who needs to verify
 * the integrity of (super)-block write requests, do not
 * enable the config option BTRFS_FS_CHECK_INTEGRITY to
 * include and compile the integrity check tool.
 */
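Rule 2a above can be condensed into a single predicate. The sketch below is a user-space model, not kernel code: `dev_model`/`blk_model` are hypothetical stand-ins for the `btrfsic_dev_state` and `btrfsic_block` fields declared further down (`never_written`, `is_iodone`, `iodone_w_error`, `flush_gen`, `last_flush_gen`).

```c
#include <assert.h>

/* Minimal model of the fields rule 2a depends on. */
struct dev_model {
	unsigned long long last_flush_gen;	/* last completed FLUSH */
};

struct blk_model {
	int never_written;	/* already on disk when the fs was mounted */
	int is_iodone;		/* write completion callback ran */
	int iodone_w_error;	/* endio reported an error */
	unsigned long long flush_gen;	/* only valid if !never_written */
	struct dev_model *dev;
};

/* Rule 2a: may a super block legally reference this block? */
static int block_is_referenceable(const struct blk_model *b)
{
	if (b->never_written)
		return 1;	/* present at mount time */
	/* otherwise: written, iodone without error, and a FLUSH to the
	 * block's device completed after the write */
	return b->is_iodone && !b->iodone_w_error &&
	       b->flush_gen <= b->dev->last_flush_gen;
}
```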
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/buffer_head.h>
#include <linux/mutex.h>
#include <linux/crc32c.h>
#include <linux/genhd.h>
#include <linux/blkdev.h>
#include "ctree.h"
#include "disk-io.h"
#include "transaction.h"
#include "extent_io.h"
#include "volumes.h"
#include "print-tree.h"
#include "locking.h"
#include "check-integrity.h"
#include "rcu-string.h"

#define BTRFSIC_BLOCK_HASHTABLE_SIZE 0x10000
#define BTRFSIC_BLOCK_LINK_HASHTABLE_SIZE 0x10000
#define BTRFSIC_DEV2STATE_HASHTABLE_SIZE 0x100
#define BTRFSIC_BLOCK_MAGIC_NUMBER 0x14491051
#define BTRFSIC_BLOCK_LINK_MAGIC_NUMBER 0x11070807
#define BTRFSIC_DEV2STATE_MAGIC_NUMBER 0x20111530
#define BTRFSIC_BLOCK_STACK_FRAME_MAGIC_NUMBER 20111300
#define BTRFSIC_TREE_DUMP_MAX_INDENT_LEVEL (200 - 6)	/* in characters,
							 * excluding " [...]" */
#define BTRFSIC_GENERATION_UNKNOWN ((u64)-1)
/*
 * The definition of the bitmask fields for the print_mask.
 * They are specified with the mount option check_integrity_print_mask.
 */
#define BTRFSIC_PRINT_MASK_SUPERBLOCK_WRITE			0x00000001
#define BTRFSIC_PRINT_MASK_ROOT_CHUNK_LOG_TREE_LOCATION		0x00000002
#define BTRFSIC_PRINT_MASK_TREE_AFTER_SB_WRITE			0x00000004
#define BTRFSIC_PRINT_MASK_TREE_BEFORE_SB_WRITE			0x00000008
#define BTRFSIC_PRINT_MASK_SUBMIT_BIO_BH			0x00000010
#define BTRFSIC_PRINT_MASK_END_IO_BIO_BH			0x00000020
#define BTRFSIC_PRINT_MASK_VERBOSE				0x00000040
#define BTRFSIC_PRINT_MASK_VERY_VERBOSE				0x00000080
#define BTRFSIC_PRINT_MASK_INITIAL_TREE				0x00000100
#define BTRFSIC_PRINT_MASK_INITIAL_ALL_TREES			0x00000200
#define BTRFSIC_PRINT_MASK_INITIAL_DATABASE			0x00000400
#define BTRFSIC_PRINT_MASK_NUM_COPIES				0x00000800
#define BTRFSIC_PRINT_MASK_TREE_WITH_ALL_MIRRORS		0x00001000
struct btrfsic_dev_state;
struct btrfsic_state;

struct btrfsic_block {
	u32 magic_num;		/* only used for debug purposes */
	unsigned int is_metadata:1;	/* if it is meta-data, not data-data */
	unsigned int is_superblock:1;	/* if it is one of the superblocks */
	unsigned int is_iodone:1;	/* if is done by lower subsystem */
	unsigned int iodone_w_error:1;	/* error was indicated to endio */
	unsigned int never_written:1;	/* block was added because it was
					 * referenced, not because it was
					 * written */
	unsigned int mirror_num;	/* large enough to hold
					 * BTRFS_SUPER_MIRROR_MAX */
	struct btrfsic_dev_state *dev_state;
	u64 dev_bytenr;		/* key, physical byte num on disk */
	u64 logical_bytenr;	/* logical byte num on disk */
	u64 generation;
	struct btrfs_disk_key disk_key;	/* extra info to print in case of
					 * issues, will not always be correct */
	struct list_head collision_resolving_node;	/* list node */
	struct list_head all_blocks_node;	/* list node */

	/* the following two lists contain block_link items */
	struct list_head ref_to_list;	/* list */
	struct list_head ref_from_list;	/* list */
	struct btrfsic_block *next_in_same_bio;
	void *orig_bio_bh_private;
	union {
		bio_end_io_t *bio;
		bh_end_io_t *bh;
	} orig_bio_bh_end_io;
	int submit_bio_bh_rw;
	u64 flush_gen;		/* only valid if !never_written */
};
/*
 * Elements of this type are allocated dynamically and required because
 * each block object can refer to and can be referred from multiple
 * blocks. The key to look them up in the hashtable is the dev_bytenr of
 * the block referred to plus the dev_bytenr of the referring block.
 * The fact that they are searchable via a hashtable and that a
 * ref_cnt is maintained is not required for the btrfs integrity
 * check algorithm itself; it is only used to make the output more
 * beautiful in case an error is detected (an error is defined
 * as a write operation to a block while that block is still referenced).
 */
struct btrfsic_block_link {
	u32 magic_num;		/* only used for debug purposes */
	u32 ref_cnt;
	struct list_head node_ref_to;	/* list node */
	struct list_head node_ref_from;	/* list node */
	struct list_head collision_resolving_node;	/* list node */
	struct btrfsic_block *block_ref_to;
	struct btrfsic_block *block_ref_from;
	u64 parent_generation;
};

struct btrfsic_dev_state {
	u32 magic_num;		/* only used for debug purposes */
	struct block_device *bdev;
	struct btrfsic_state *state;
	struct list_head collision_resolving_node;	/* list node */
	struct btrfsic_block dummy_block_for_bio_bh_flush;
	u64 last_flush_gen;
	char name[BDEVNAME_SIZE];
};

struct btrfsic_block_hashtable {
	struct list_head table[BTRFSIC_BLOCK_HASHTABLE_SIZE];
};

struct btrfsic_block_link_hashtable {
	struct list_head table[BTRFSIC_BLOCK_LINK_HASHTABLE_SIZE];
};

struct btrfsic_dev_state_hashtable {
	struct list_head table[BTRFSIC_DEV2STATE_HASHTABLE_SIZE];
};

struct btrfsic_block_data_ctx {
	u64 start;		/* virtual bytenr */
	u64 dev_bytenr;		/* physical bytenr on device */
	u32 len;
	struct btrfsic_dev_state *dev;
	char **datav;
	struct page **pagev;
	void *mem_to_free;
};

/* This structure is used to implement recursion without occupying
 * any stack space, refer to btrfsic_process_metablock() */
struct btrfsic_stack_frame {
	u32 magic;
	u32 nr;
	int error;
	int i;
	int limit_nesting;
	int num_copies;
	int mirror_num;
	struct btrfsic_block *block;
	struct btrfsic_block_data_ctx *block_ctx;
	struct btrfsic_block *next_block;
	struct btrfsic_block_data_ctx next_block_ctx;
	struct btrfs_header *hdr;
	struct btrfsic_stack_frame *prev;
};
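The explicit-frame pattern above (a heap-allocated activation record with a `prev` pointer and a resume index `i`) is how btrfsic_process_metablock() walks arbitrarily deep trees without consuming kernel stack. A user-space sketch of the same idea, applied to a toy n-ary tree sum rather than btrfsic's metablock walk (all names here are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

struct node {
	int value;
	int nr;			/* number of children */
	struct node **child;
};

/* One manually managed activation record, analogous to
 * btrfsic_stack_frame: `i` remembers which child to visit next
 * when this frame is resumed, `prev` replaces the call stack. */
struct frame {
	const struct node *node;
	int i;
	struct frame *prev;
};

/* Sum all node values without using the call stack for recursion. */
static long long tree_sum(const struct node *root)
{
	long long sum = 0;
	struct frame *top = calloc(1, sizeof(*top));

	top->node = root;
	while (top) {
		struct frame *f = top;

		if (f->i == 0)
			sum += f->node->value;	/* first visit: count node */
		if (f->i < f->node->nr) {
			/* "recursive call": push a frame for next child */
			struct frame *nf = calloc(1, sizeof(*nf));

			nf->node = f->node->child[f->i++];
			nf->prev = f;
			top = nf;
		} else {
			/* "return": pop back to the caller's frame */
			top = f->prev;
			free(f);
		}
	}
	return sum;
}
```

The kernel code additionally bounds the walk with `limit_nesting` and keeps per-frame I/O context (`next_block_ctx`), but the push/pop mechanics are the same.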
/* Some state per mounted filesystem */
struct btrfsic_state {
	u32 print_mask;
	int include_extent_data;
	int csum_size;
	struct list_head all_blocks_list;
	struct btrfsic_block_hashtable block_hashtable;
	struct btrfsic_block_link_hashtable block_link_hashtable;
	struct btrfs_root *root;
	u64 max_superblock_generation;
	struct btrfsic_block *latest_superblock;
	u32 metablock_size;
	u32 datablock_size;
};
static void btrfsic_block_init(struct btrfsic_block *b);
static struct btrfsic_block *btrfsic_block_alloc(void);
static void btrfsic_block_free(struct btrfsic_block *b);
static void btrfsic_block_link_init(struct btrfsic_block_link *n);
static struct btrfsic_block_link *btrfsic_block_link_alloc(void);
static void btrfsic_block_link_free(struct btrfsic_block_link *n);
static void btrfsic_dev_state_init(struct btrfsic_dev_state *ds);
static struct btrfsic_dev_state *btrfsic_dev_state_alloc(void);
static void btrfsic_dev_state_free(struct btrfsic_dev_state *ds);
static void btrfsic_block_hashtable_init(struct btrfsic_block_hashtable *h);
static void btrfsic_block_hashtable_add(struct btrfsic_block *b,
					struct btrfsic_block_hashtable *h);
static void btrfsic_block_hashtable_remove(struct btrfsic_block *b);
static struct btrfsic_block *btrfsic_block_hashtable_lookup(
		struct block_device *bdev,
		u64 dev_bytenr,
		struct btrfsic_block_hashtable *h);
static void btrfsic_block_link_hashtable_init(
		struct btrfsic_block_link_hashtable *h);
static void btrfsic_block_link_hashtable_add(
		struct btrfsic_block_link *l,
		struct btrfsic_block_link_hashtable *h);
static void btrfsic_block_link_hashtable_remove(struct btrfsic_block_link *l);
static struct btrfsic_block_link *btrfsic_block_link_hashtable_lookup(
		struct block_device *bdev_ref_to,
		u64 dev_bytenr_ref_to,
		struct block_device *bdev_ref_from,
		u64 dev_bytenr_ref_from,
		struct btrfsic_block_link_hashtable *h);
static void btrfsic_dev_state_hashtable_init(
		struct btrfsic_dev_state_hashtable *h);
static void btrfsic_dev_state_hashtable_add(
		struct btrfsic_dev_state *ds,
		struct btrfsic_dev_state_hashtable *h);
static void btrfsic_dev_state_hashtable_remove(struct btrfsic_dev_state *ds);
static struct btrfsic_dev_state *btrfsic_dev_state_hashtable_lookup(
		struct block_device *bdev,
		struct btrfsic_dev_state_hashtable *h);
static struct btrfsic_stack_frame *btrfsic_stack_frame_alloc(void);
static void btrfsic_stack_frame_free(struct btrfsic_stack_frame *sf);
static int btrfsic_process_superblock(struct btrfsic_state *state,
				      struct btrfs_fs_devices *fs_devices);
static int btrfsic_process_metablock(struct btrfsic_state *state,
				     struct btrfsic_block *block,
				     struct btrfsic_block_data_ctx *block_ctx,
				     int limit_nesting, int force_iodone_flag);
static void btrfsic_read_from_block_data(
		struct btrfsic_block_data_ctx *block_ctx,
		void *dst, u32 offset, size_t len);
static int btrfsic_create_link_to_next_block(
		struct btrfsic_state *state,
		struct btrfsic_block *block,
		struct btrfsic_block_data_ctx *block_ctx,
		u64 next_bytenr,
		int limit_nesting,
		struct btrfsic_block_data_ctx *next_block_ctx,
		struct btrfsic_block **next_blockp,
		int force_iodone_flag,
		int *num_copiesp, int *mirror_nump,
		struct btrfs_disk_key *disk_key,
		u64 parent_generation);
static int btrfsic_handle_extent_data(struct btrfsic_state *state,
				      struct btrfsic_block *block,
				      struct btrfsic_block_data_ctx *block_ctx,
				      u32 item_offset, int force_iodone_flag);
static int btrfsic_map_block(struct btrfsic_state *state, u64 bytenr, u32 len,
			     struct btrfsic_block_data_ctx *block_ctx_out,
			     int mirror_num);
static int btrfsic_map_superblock(struct btrfsic_state *state, u64 bytenr,
				  u32 len, struct block_device *bdev,
				  struct btrfsic_block_data_ctx *block_ctx_out);
static void btrfsic_release_block_ctx(struct btrfsic_block_data_ctx *block_ctx);
static int btrfsic_read_block(struct btrfsic_state *state,
			      struct btrfsic_block_data_ctx *block_ctx);
static void btrfsic_dump_database(struct btrfsic_state *state);
static int btrfsic_test_for_metadata(struct btrfsic_state *state,
				     char **datav, unsigned int num_pages);
static void btrfsic_process_written_block(struct btrfsic_dev_state *dev_state,
					  u64 dev_bytenr, char **mapped_datav,
					  unsigned int num_pages,
					  struct bio *bio, int *bio_is_patched,
					  struct buffer_head *bh,
					  int submit_bio_bh_rw);
static int btrfsic_process_written_superblock(
		struct btrfsic_state *state,
		struct btrfsic_block *const block,
		struct btrfs_super_block *const super_hdr);
static void btrfsic_bio_end_io(struct bio *bp, int bio_error_status);
static void btrfsic_bh_end_io(struct buffer_head *bh, int uptodate);
static int btrfsic_is_block_ref_by_superblock(
		const struct btrfsic_state *state,
		const struct btrfsic_block *block,
		int recursion_level);
static int btrfsic_check_all_ref_blocks(struct btrfsic_state *state,
					struct btrfsic_block *const block,
					int recursion_level);
static void btrfsic_print_add_link(const struct btrfsic_state *state,
				   const struct btrfsic_block_link *l);
static void btrfsic_print_rem_link(const struct btrfsic_state *state,
				   const struct btrfsic_block_link *l);
static char btrfsic_get_block_type(const struct btrfsic_state *state,
				   const struct btrfsic_block *block);
static void btrfsic_dump_tree(const struct btrfsic_state *state);
static void btrfsic_dump_tree_sub(const struct btrfsic_state *state,
				  const struct btrfsic_block *block,
				  int indent_level);
static struct btrfsic_block_link *btrfsic_block_link_lookup_or_add(
		struct btrfsic_state *state,
		struct btrfsic_block_data_ctx *next_block_ctx,
		struct btrfsic_block *next_block,
		struct btrfsic_block *from_block,
		u64 parent_generation);
static struct btrfsic_block *btrfsic_block_lookup_or_add(
		struct btrfsic_state *state,
		struct btrfsic_block_data_ctx *block_ctx,
		const char *additional_string,
		int is_metadata,
		int is_iodone,
		int never_written,
		int mirror_num,
		int *was_created);
static int btrfsic_process_superblock_dev_mirror(
		struct btrfsic_state *state,
		struct btrfsic_dev_state *dev_state,
		struct btrfs_device *device,
		int superblock_mirror_num,
		struct btrfsic_dev_state **selected_dev_state,
		struct btrfs_super_block *selected_super);
static struct btrfsic_dev_state *btrfsic_dev_state_lookup(
		struct block_device *bdev);
static void btrfsic_cmp_log_and_dev_bytenr(struct btrfsic_state *state,
					   u64 bytenr,
					   struct btrfsic_dev_state *dev_state,
					   u64 dev_bytenr);

static struct mutex btrfsic_mutex;
static int btrfsic_is_initialized;
static struct btrfsic_dev_state_hashtable btrfsic_dev_state_hashtable;
static void btrfsic_block_init(struct btrfsic_block *b)
{
	b->magic_num = BTRFSIC_BLOCK_MAGIC_NUMBER;
	b->dev_state = NULL;
	b->dev_bytenr = 0;
	b->logical_bytenr = 0;
	b->generation = BTRFSIC_GENERATION_UNKNOWN;
	b->disk_key.objectid = 0;
	b->disk_key.type = 0;
	b->disk_key.offset = 0;
	b->is_metadata = 0;
	b->is_superblock = 0;
	b->is_iodone = 0;
	b->iodone_w_error = 0;
	b->never_written = 0;
	b->mirror_num = 0;
	b->next_in_same_bio = NULL;
	b->orig_bio_bh_private = NULL;
	b->orig_bio_bh_end_io.bio = NULL;
	INIT_LIST_HEAD(&b->collision_resolving_node);
	INIT_LIST_HEAD(&b->all_blocks_node);
	INIT_LIST_HEAD(&b->ref_to_list);
	INIT_LIST_HEAD(&b->ref_from_list);
	b->submit_bio_bh_rw = 0;
	b->flush_gen = 0;
}

static struct btrfsic_block *btrfsic_block_alloc(void)
{
	struct btrfsic_block *b;

	b = kzalloc(sizeof(*b), GFP_NOFS);
	if (NULL != b)
		btrfsic_block_init(b);

	return b;
}

static void btrfsic_block_free(struct btrfsic_block *b)
{
	BUG_ON(!(NULL == b || BTRFSIC_BLOCK_MAGIC_NUMBER == b->magic_num));
	kfree(b);
}

static void btrfsic_block_link_init(struct btrfsic_block_link *l)
{
	l->magic_num = BTRFSIC_BLOCK_LINK_MAGIC_NUMBER;
	l->ref_cnt = 1;
	INIT_LIST_HEAD(&l->node_ref_to);
	INIT_LIST_HEAD(&l->node_ref_from);
	INIT_LIST_HEAD(&l->collision_resolving_node);
	l->block_ref_to = NULL;
	l->block_ref_from = NULL;
}

static struct btrfsic_block_link *btrfsic_block_link_alloc(void)
{
	struct btrfsic_block_link *l;

	l = kzalloc(sizeof(*l), GFP_NOFS);
	if (NULL != l)
		btrfsic_block_link_init(l);

	return l;
}

static void btrfsic_block_link_free(struct btrfsic_block_link *l)
{
	BUG_ON(!(NULL == l || BTRFSIC_BLOCK_LINK_MAGIC_NUMBER == l->magic_num));
	kfree(l);
}

static void btrfsic_dev_state_init(struct btrfsic_dev_state *ds)
{
	ds->magic_num = BTRFSIC_DEV2STATE_MAGIC_NUMBER;
	ds->bdev = NULL;
	ds->state = NULL;
	ds->name[0] = '\0';
	INIT_LIST_HEAD(&ds->collision_resolving_node);
	ds->last_flush_gen = 0;
	btrfsic_block_init(&ds->dummy_block_for_bio_bh_flush);
	ds->dummy_block_for_bio_bh_flush.is_iodone = 1;
	ds->dummy_block_for_bio_bh_flush.dev_state = ds;
}

static struct btrfsic_dev_state *btrfsic_dev_state_alloc(void)
{
	struct btrfsic_dev_state *ds;

	ds = kzalloc(sizeof(*ds), GFP_NOFS);
	if (NULL != ds)
		btrfsic_dev_state_init(ds);

	return ds;
}

static void btrfsic_dev_state_free(struct btrfsic_dev_state *ds)
{
	BUG_ON(!(NULL == ds ||
		 BTRFSIC_DEV2STATE_MAGIC_NUMBER == ds->magic_num));
	kfree(ds);
}
static void btrfsic_block_hashtable_init(struct btrfsic_block_hashtable *h)
{
	int i;

	for (i = 0; i < BTRFSIC_BLOCK_HASHTABLE_SIZE; i++)
		INIT_LIST_HEAD(h->table + i);
}

static void btrfsic_block_hashtable_add(struct btrfsic_block *b,
					struct btrfsic_block_hashtable *h)
{
	const unsigned int hashval =
	    (((unsigned int)(b->dev_bytenr >> 16)) ^
	     ((unsigned int)((uintptr_t)b->dev_state->bdev))) &
	     (BTRFSIC_BLOCK_HASHTABLE_SIZE - 1);

	list_add(&b->collision_resolving_node, h->table + hashval);
}

static void btrfsic_block_hashtable_remove(struct btrfsic_block *b)
{
	list_del(&b->collision_resolving_node);
}

static struct btrfsic_block *btrfsic_block_hashtable_lookup(
		struct block_device *bdev,
		u64 dev_bytenr,
		struct btrfsic_block_hashtable *h)
{
	const unsigned int hashval =
	    (((unsigned int)(dev_bytenr >> 16)) ^
	     ((unsigned int)((uintptr_t)bdev))) &
	     (BTRFSIC_BLOCK_HASHTABLE_SIZE - 1);
	struct list_head *elem;

	list_for_each(elem, h->table + hashval) {
		struct btrfsic_block *const b =
		    list_entry(elem, struct btrfsic_block,
			       collision_resolving_node);

		if (b->dev_state->bdev == bdev && b->dev_bytenr == dev_bytenr)
			return b;
	}

	return NULL;
}
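The lookup key here is (block_device pointer, dev_bytenr); the hash mixes the middle bits of the byte offset with the pointer value and masks down to a table index. A user-space sketch of the same mixing, with a plain integer standing in for the `struct block_device *` (the function name here is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define TABLE_SIZE 0x10000	/* must be a power of two for the mask */

/* Same mixing as btrfsic_block_hashtable_add/lookup: xor bits 16..47
 * of the device byte offset with the bdev pointer value, then mask
 * down to a bucket index. Blocks are >= 4 KiB apart on disk, so the
 * low 16 bits of dev_bytenr carry little entropy and are skipped. */
static unsigned int block_hash(uint64_t dev_bytenr, uintptr_t bdev)
{
	return (((unsigned int)(dev_bytenr >> 16)) ^
		((unsigned int)bdev)) & (TABLE_SIZE - 1);
}
```

Equal keys always land in the same bucket; within a bucket, the `list_for_each` walk compares the full (bdev, dev_bytenr) pair to resolve collisions.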
static void btrfsic_block_link_hashtable_init(
		struct btrfsic_block_link_hashtable *h)
{
	int i;

	for (i = 0; i < BTRFSIC_BLOCK_LINK_HASHTABLE_SIZE; i++)
		INIT_LIST_HEAD(h->table + i);
}

static void btrfsic_block_link_hashtable_add(
		struct btrfsic_block_link *l,
		struct btrfsic_block_link_hashtable *h)
{
	const unsigned int hashval =
	    (((unsigned int)(l->block_ref_to->dev_bytenr >> 16)) ^
	     ((unsigned int)(l->block_ref_from->dev_bytenr >> 16)) ^
	     ((unsigned int)((uintptr_t)l->block_ref_to->dev_state->bdev)) ^
	     ((unsigned int)((uintptr_t)l->block_ref_from->dev_state->bdev)))
	     & (BTRFSIC_BLOCK_LINK_HASHTABLE_SIZE - 1);

	BUG_ON(NULL == l->block_ref_to);
	BUG_ON(NULL == l->block_ref_from);
	list_add(&l->collision_resolving_node, h->table + hashval);
}

static void btrfsic_block_link_hashtable_remove(struct btrfsic_block_link *l)
{
	list_del(&l->collision_resolving_node);
}

static struct btrfsic_block_link *btrfsic_block_link_hashtable_lookup(
		struct block_device *bdev_ref_to,
		u64 dev_bytenr_ref_to,
		struct block_device *bdev_ref_from,
		u64 dev_bytenr_ref_from,
		struct btrfsic_block_link_hashtable *h)
{
	const unsigned int hashval =
	    (((unsigned int)(dev_bytenr_ref_to >> 16)) ^
	     ((unsigned int)(dev_bytenr_ref_from >> 16)) ^
	     ((unsigned int)((uintptr_t)bdev_ref_to)) ^
	     ((unsigned int)((uintptr_t)bdev_ref_from))) &
	     (BTRFSIC_BLOCK_LINK_HASHTABLE_SIZE - 1);
	struct list_head *elem;

	list_for_each(elem, h->table + hashval) {
		struct btrfsic_block_link *const l =
		    list_entry(elem, struct btrfsic_block_link,
			       collision_resolving_node);

		BUG_ON(NULL == l->block_ref_to);
		BUG_ON(NULL == l->block_ref_from);
		if (l->block_ref_to->dev_state->bdev == bdev_ref_to &&
		    l->block_ref_to->dev_bytenr == dev_bytenr_ref_to &&
		    l->block_ref_from->dev_state->bdev == bdev_ref_from &&
		    l->block_ref_from->dev_bytenr == dev_bytenr_ref_from)
			return l;
	}

	return NULL;
}
static void btrfsic_dev_state_hashtable_init(
		struct btrfsic_dev_state_hashtable *h)
{
	int i;

	for (i = 0; i < BTRFSIC_DEV2STATE_HASHTABLE_SIZE; i++)
		INIT_LIST_HEAD(h->table + i);
}

static void btrfsic_dev_state_hashtable_add(
		struct btrfsic_dev_state *ds,
		struct btrfsic_dev_state_hashtable *h)
{
	const unsigned int hashval =
	    (((unsigned int)((uintptr_t)ds->bdev)) &
	     (BTRFSIC_DEV2STATE_HASHTABLE_SIZE - 1));

	list_add(&ds->collision_resolving_node, h->table + hashval);
}

static void btrfsic_dev_state_hashtable_remove(struct btrfsic_dev_state *ds)
{
	list_del(&ds->collision_resolving_node);
}

static struct btrfsic_dev_state *btrfsic_dev_state_hashtable_lookup(
		struct block_device *bdev,
		struct btrfsic_dev_state_hashtable *h)
{
	const unsigned int hashval =
	    (((unsigned int)((uintptr_t)bdev)) &
	     (BTRFSIC_DEV2STATE_HASHTABLE_SIZE - 1));
	struct list_head *elem;

	list_for_each(elem, h->table + hashval) {
		struct btrfsic_dev_state *const ds =
		    list_entry(elem, struct btrfsic_dev_state,
			       collision_resolving_node);

		if (ds->bdev == bdev)
			return ds;
	}

	return NULL;
}
static int btrfsic_process_superblock(struct btrfsic_state *state,
				      struct btrfs_fs_devices *fs_devices)
{
	int ret = 0;
	struct btrfs_super_block *selected_super;
	struct list_head *dev_head = &fs_devices->devices;
	struct btrfs_device *device;
	struct btrfsic_dev_state *selected_dev_state = NULL;
	int pass;

	BUG_ON(NULL == state);
	selected_super = kzalloc(sizeof(*selected_super), GFP_NOFS);
	if (NULL == selected_super) {
		printk(KERN_INFO "btrfsic: error, kmalloc failed!\n");
		return -1;
	}

	list_for_each_entry(device, dev_head, dev_list) {
		int i;
		struct btrfsic_dev_state *dev_state;

		if (!device->bdev || !device->name)
			continue;

		dev_state = btrfsic_dev_state_lookup(device->bdev);
		BUG_ON(NULL == dev_state);
		for (i = 0; i < BTRFS_SUPER_MIRROR_MAX; i++) {
			ret = btrfsic_process_superblock_dev_mirror(
					state, dev_state, device, i,
					&selected_dev_state, selected_super);
			if (0 != ret && 0 == i) {
				kfree(selected_super);
				return ret;
			}
		}
	}

	if (NULL == state->latest_superblock) {
		printk(KERN_INFO "btrfsic: no superblock found!\n");
		kfree(selected_super);
		return -1;
	}

	state->csum_size = btrfs_super_csum_size(selected_super);

	for (pass = 0; pass < 3; pass++) {
		int num_copies;
		int mirror_num;
		u64 next_bytenr;

		switch (pass) {
		case 0:
			next_bytenr = btrfs_super_root(selected_super);
			if (state->print_mask &
			    BTRFSIC_PRINT_MASK_ROOT_CHUNK_LOG_TREE_LOCATION)
				printk(KERN_INFO "root@%llu\n", next_bytenr);
			break;
		case 1:
			next_bytenr = btrfs_super_chunk_root(selected_super);
			if (state->print_mask &
			    BTRFSIC_PRINT_MASK_ROOT_CHUNK_LOG_TREE_LOCATION)
				printk(KERN_INFO "chunk@%llu\n", next_bytenr);
			break;
		case 2:
			next_bytenr = btrfs_super_log_root(selected_super);
			if (0 == next_bytenr)
				continue;
			if (state->print_mask &
			    BTRFSIC_PRINT_MASK_ROOT_CHUNK_LOG_TREE_LOCATION)
				printk(KERN_INFO "log@%llu\n", next_bytenr);
			break;
		}

		num_copies =
		    btrfs_num_copies(state->root->fs_info,
				     next_bytenr, state->metablock_size);
		if (state->print_mask & BTRFSIC_PRINT_MASK_NUM_COPIES)
			printk(KERN_INFO "num_copies(log_bytenr=%llu) = %d\n",
			       next_bytenr, num_copies);

		for (mirror_num = 1; mirror_num <= num_copies; mirror_num++) {
			struct btrfsic_block *next_block;
			struct btrfsic_block_data_ctx tmp_next_block_ctx;
			struct btrfsic_block_link *l;

			ret = btrfsic_map_block(state, next_bytenr,
						state->metablock_size,
						&tmp_next_block_ctx,
						mirror_num);
			if (ret) {
				printk(KERN_INFO "btrfsic:"
				       " btrfsic_map_block(root @%llu,"
				       " mirror %d) failed!\n",
				       next_bytenr, mirror_num);
				kfree(selected_super);
				return -1;
			}

			next_block = btrfsic_block_hashtable_lookup(
					tmp_next_block_ctx.dev->bdev,
					tmp_next_block_ctx.dev_bytenr,
					&state->block_hashtable);
			BUG_ON(NULL == next_block);

			l = btrfsic_block_link_hashtable_lookup(
					tmp_next_block_ctx.dev->bdev,
					tmp_next_block_ctx.dev_bytenr,
					state->latest_superblock->dev_state->
					bdev,
					state->latest_superblock->dev_bytenr,
					&state->block_link_hashtable);
			BUG_ON(NULL == l);

			ret = btrfsic_read_block(state, &tmp_next_block_ctx);
			if (ret < (int)PAGE_CACHE_SIZE) {
				printk(KERN_INFO
				       "btrfsic: read @logical %llu failed!\n",
				       tmp_next_block_ctx.start);
				btrfsic_release_block_ctx(&tmp_next_block_ctx);
				kfree(selected_super);
				return -1;
			}

			ret = btrfsic_process_metablock(state,
							next_block,
							&tmp_next_block_ctx,
							BTRFS_MAX_LEVEL + 3, 1);
			btrfsic_release_block_ctx(&tmp_next_block_ctx);
		}
	}

	kfree(selected_super);
	return ret;
}
static int btrfsic_process_superblock_dev_mirror(
		struct btrfsic_state *state,
		struct btrfsic_dev_state *dev_state,
		struct btrfs_device *device,
		int superblock_mirror_num,
		struct btrfsic_dev_state **selected_dev_state,
		struct btrfs_super_block *selected_super)
{
	struct btrfs_super_block *super_tmp;
	u64 dev_bytenr;
	struct buffer_head *bh;
	struct btrfsic_block *superblock_tmp;
	int pass;
	struct block_device *const superblock_bdev = device->bdev;

	/* super block bytenr is always the unmapped device bytenr */
	dev_bytenr = btrfs_sb_offset(superblock_mirror_num);
	if (dev_bytenr + BTRFS_SUPER_INFO_SIZE > device->total_bytes)
		return -1;
	bh = __bread(superblock_bdev, dev_bytenr / 4096,
		     BTRFS_SUPER_INFO_SIZE);
	if (NULL == bh)
		return -1;
	super_tmp = (struct btrfs_super_block *)
	    (bh->b_data + (dev_bytenr & 4095));

	if (btrfs_super_bytenr(super_tmp) != dev_bytenr ||
	    btrfs_super_magic(super_tmp) != BTRFS_MAGIC ||
	    memcmp(device->uuid, super_tmp->dev_item.uuid, BTRFS_UUID_SIZE) ||
	    btrfs_super_nodesize(super_tmp) != state->metablock_size ||
	    btrfs_super_leafsize(super_tmp) != state->metablock_size ||
	    btrfs_super_sectorsize(super_tmp) != state->datablock_size) {
		brelse(bh);
		return 0;
	}

	superblock_tmp =
	    btrfsic_block_hashtable_lookup(superblock_bdev,
					   dev_bytenr,
					   &state->block_hashtable);
	if (NULL == superblock_tmp) {
		superblock_tmp = btrfsic_block_alloc();
		if (NULL == superblock_tmp) {
			printk(KERN_INFO "btrfsic: error, kmalloc failed!\n");
			brelse(bh);
			return -1;
		}
		/* for superblock, only the dev_bytenr makes sense */
		superblock_tmp->dev_bytenr = dev_bytenr;
		superblock_tmp->dev_state = dev_state;
		superblock_tmp->logical_bytenr = dev_bytenr;
		superblock_tmp->generation = btrfs_super_generation(super_tmp);
		superblock_tmp->is_metadata = 1;
		superblock_tmp->is_superblock = 1;
		superblock_tmp->is_iodone = 1;
		superblock_tmp->never_written = 0;
		superblock_tmp->mirror_num = 1 + superblock_mirror_num;
		if (state->print_mask & BTRFSIC_PRINT_MASK_SUPERBLOCK_WRITE)
			printk_in_rcu(KERN_INFO "New initial S-block (bdev %p, %s)"
				      " @%llu (%s/%llu/%d)\n",
				      superblock_bdev,
				      rcu_str_deref(device->name), dev_bytenr,
				      dev_state->name, dev_bytenr,
				      superblock_mirror_num);
		list_add(&superblock_tmp->all_blocks_node,
			 &state->all_blocks_list);
		btrfsic_block_hashtable_add(superblock_tmp,
					    &state->block_hashtable);
	}

	/* select the one with the highest generation field */
	if (btrfs_super_generation(super_tmp) >
	    state->max_superblock_generation ||
	    0 == state->max_superblock_generation) {
		memcpy(selected_super, super_tmp, sizeof(*selected_super));
		*selected_dev_state = dev_state;
		state->max_superblock_generation =
		    btrfs_super_generation(super_tmp);
		state->latest_superblock = superblock_tmp;
	}

	for (pass = 0; pass < 3; pass++) {
		u64 next_bytenr;
		int num_copies;
		int mirror_num;
		const char *additional_string = NULL;
		struct btrfs_disk_key tmp_disk_key;

		tmp_disk_key.type = BTRFS_ROOT_ITEM_KEY;
		tmp_disk_key.offset = 0;
		switch (pass) {
		case 0:
			btrfs_set_disk_key_objectid(&tmp_disk_key,
						    BTRFS_ROOT_TREE_OBJECTID);
			additional_string = "initial root ";
			next_bytenr = btrfs_super_root(super_tmp);
			break;
		case 1:
			btrfs_set_disk_key_objectid(&tmp_disk_key,
						    BTRFS_CHUNK_TREE_OBJECTID);
			additional_string = "initial chunk ";
			next_bytenr = btrfs_super_chunk_root(super_tmp);
			break;
		case 2:
			btrfs_set_disk_key_objectid(&tmp_disk_key,
						    BTRFS_TREE_LOG_OBJECTID);
			additional_string = "initial log ";
			next_bytenr = btrfs_super_log_root(super_tmp);
			if (0 == next_bytenr)
				continue;
			break;
		}

		num_copies =
		    btrfs_num_copies(state->root->fs_info,
				     next_bytenr, state->metablock_size);
		if (state->print_mask & BTRFSIC_PRINT_MASK_NUM_COPIES)
			printk(KERN_INFO "num_copies(log_bytenr=%llu) = %d\n",
			       next_bytenr, num_copies);
		for (mirror_num = 1; mirror_num <= num_copies; mirror_num++) {
			struct btrfsic_block *next_block;
			struct btrfsic_block_data_ctx tmp_next_block_ctx;
			struct btrfsic_block_link *l;

			if (btrfsic_map_block(state, next_bytenr,
					      state->metablock_size,
					      &tmp_next_block_ctx,
					      mirror_num)) {
				printk(KERN_INFO "btrfsic: btrfsic_map_block("
				       "bytenr @%llu, mirror %d) failed!\n",
				       next_bytenr, mirror_num);
				brelse(bh);
				return -1;
			}

			next_block = btrfsic_block_lookup_or_add(
					state, &tmp_next_block_ctx,
					additional_string, 1, 1, 0,
					mirror_num, NULL);
			if (NULL == next_block) {
				btrfsic_release_block_ctx(&tmp_next_block_ctx);
				brelse(bh);
				return -1;
			}

			next_block->disk_key = tmp_disk_key;
			next_block->generation = BTRFSIC_GENERATION_UNKNOWN;
			l = btrfsic_block_link_lookup_or_add(
					state, &tmp_next_block_ctx,
					next_block, superblock_tmp,
					BTRFSIC_GENERATION_UNKNOWN);
			btrfsic_release_block_ctx(&tmp_next_block_ctx);
			if (NULL == l) {
				brelse(bh);
				return -1;
			}
		}
	}
	if (state->print_mask & BTRFSIC_PRINT_MASK_INITIAL_ALL_TREES)
		btrfsic_dump_tree_sub(state, superblock_tmp, 0);

	brelse(bh);
	return 0;
}
static struct btrfsic_stack_frame *btrfsic_stack_frame_alloc(void)
{
	struct btrfsic_stack_frame *sf;

	sf = kzalloc(sizeof(*sf), GFP_NOFS);
	if (NULL == sf)
		printk(KERN_INFO "btrfsic: alloc memory failed!\n");
	else
		sf->magic = BTRFSIC_BLOCK_STACK_FRAME_MAGIC_NUMBER;
	return sf;
}
static void btrfsic_stack_frame_free(struct btrfsic_stack_frame *sf)
{
	BUG_ON(!(NULL == sf ||
		 BTRFSIC_BLOCK_STACK_FRAME_MAGIC_NUMBER == sf->magic));
	kfree(sf);
}
static int btrfsic_process_metablock(
		struct btrfsic_state *state,
		struct btrfsic_block *const first_block,
		struct btrfsic_block_data_ctx *const first_block_ctx,
		int first_limit_nesting, int force_iodone_flag)
{
	struct btrfsic_stack_frame initial_stack_frame = { 0 };
	struct btrfsic_stack_frame *sf;
	struct btrfsic_stack_frame *next_stack;
	struct btrfs_header *const first_hdr =
		(struct btrfs_header *)first_block_ctx->datav[0];

	BUG_ON(!first_hdr);
	sf = &initial_stack_frame;
	sf->error = 0;
	sf->i = -1;
	sf->limit_nesting = first_limit_nesting;
	sf->block = first_block;
	sf->block_ctx = first_block_ctx;
	sf->next_block = NULL;
	sf->hdr = first_hdr;
	sf->prev = NULL;

continue_with_new_stack_frame:
	sf->block->generation = le64_to_cpu(sf->hdr->generation);
	if (0 == sf->hdr->level) {
		struct btrfs_leaf *const leafhdr =
		    (struct btrfs_leaf *)sf->hdr;

		if (-1 == sf->i) {
			sf->nr = btrfs_stack_header_nritems(&leafhdr->header);

			if (state->print_mask & BTRFSIC_PRINT_MASK_VERBOSE)
				printk(KERN_INFO
				       "leaf %llu items %d generation %llu"
				       " owner %llu\n",
				       sf->block_ctx->start, sf->nr,
				       btrfs_stack_header_generation(
					       &leafhdr->header),
				       btrfs_stack_header_owner(
					       &leafhdr->header));
		}

continue_with_current_leaf_stack_frame:
		if (0 == sf->num_copies || sf->mirror_num > sf->num_copies) {
			sf->i++;
			sf->num_copies = 0;
		}

		if (sf->i < sf->nr) {
			struct btrfs_item disk_item;
			u32 disk_item_offset =
			    (uintptr_t)(leafhdr->items + sf->i) -
			    (uintptr_t)leafhdr;
			struct btrfs_disk_key *disk_key;
			u8 type;
			u32 item_offset;
			u32 item_size;