xfs: update for v3.12-rc1

For 3.12-rc1 there are a number of bugfixes in addition to work to ease usage
 of shared code between libxfs and the kernel, the rest of the work to enable
 project and group quotas to be used simultaneously, performance optimisations
 in the log and the CIL, directory entry file type support, fixes for log space
 reservations, some spelling/grammar cleanups, and the addition of user
 namespace support.
 
 - introduce readahead to log recovery
 - add directory entry file type support
 - fix a number of spelling errors in comments
 - introduce new Q_XGETQSTATV quotactl for project quotas
 - add USER_NS support
 - log space reservation rework
 - CIL optimisations
 - kernel/userspace libxfs rework
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.10 (GNU/Linux)
 
 iQIcBAABAgAGBQJSLeikAAoJENaLyazVq6ZOciEP/3tc850sQsPlNwP9aqd1l2Wk
 S1RJ8i+MUQ2W/PlbswCXvdUCT8DIwXWxL31tGvi8vtaLhh6t8ICSZwqNil+/GCIJ
 BErVvY4oXhEMHhlbIRRvpxblTfJGiYy3puUEz9VI0yDdUVnC33+DuEeLTQ/0mibo
 /UUqKFmM3KYpOc8vIQvH5K5i8PkjtMt9yge0k4l9COD30gtY2okkaD4b1voOsKc+
 5YFqulq7zcXBUYti+EFCQeV8aUBTGEPN4PJRdcS12/ylzsTzZivAOO+QREu7qBW8
 x+Gj8fOC+yYWCttmJlfa1n8taxge3ndEuzKN97nvvfQgjvvunMvwJ499skryYVdB
 EcPnBnpDUQuz/y7exKBT9uROK817vZBtfHzSova29ayQSWC+qDpNE4xXeDIqeCtT
 CPxdHuWMOvIdZg41E4x7je0elaZl8EAZ8hycc2WuRhtukEkIdE1O8aD7IVrMYee8
 kg+aVHG5nmYRInO1WuMinbtiCzwvVoBJToWM3y4cbfgW0dILASRyL53HDd+eCr1j
 kOpPIVgXlBZgiPMmdYahWxyVVWcE7zyex0w4frzWVlJMZ4lP5brppD6qfQg1JwOB
 z21Y95F5C2GxSyN/Lwps0G6jujHrpe6GVeYK7uKCtnqTD83nSShv5Naln7pQ3AUs
 qUMsqmJob4+bwt94Xgbx
 =V4s4
 -----END PGP SIGNATURE-----

Merge tag 'xfs-for-linus-v3.12-rc1' of git://oss.sgi.com/xfs/xfs

Pull xfs updates from Ben Myers:
 "For 3.12-rc1 there are a number of bugfixes in addition to work to
  ease usage of shared code between libxfs and the kernel, the rest of
  the work to enable project and group quotas to be used simultaneously,
  performance optimisations in the log and the CIL, directory entry file
  type support, fixes for log space reservations, some spelling/grammar
  cleanups, and the addition of user namespace support.

   - introduce readahead to log recovery
   - add directory entry file type support
   - fix a number of spelling errors in comments
   - introduce new Q_XGETQSTATV quotactl for project quotas
   - add USER_NS support
   - log space reservation rework
   - CIL optimisations
  - kernel/userspace libxfs rework"

* tag 'xfs-for-linus-v3.12-rc1' of git://oss.sgi.com/xfs/xfs: (112 commits)
  xfs: XFS_MOUNT_QUOTA_ALL needed by userspace
  xfs: dtype changed xfs_dir2_sfe_put_ino to xfs_dir3_sfe_put_ino
  Fix wrong flag ASSERT in xfs_attr_shortform_getvalue
  xfs: finish removing IOP_* macros.
  xfs: inode log reservations are too small
  xfs: check correct status variable for xfs_inobt_get_rec() call
  xfs: inode buffers may not be valid during recovery readahead
  xfs: check LSN ordering for v5 superblocks during recovery
  xfs: btree block LSN escaping to disk uninitialised
  XFS: Assertion failed: first <= last && last < BBTOB(bp->b_length), file: fs/xfs/xfs_trans_buf.c, line: 568
  xfs: fix bad dquot buffer size in log recovery readahead
  xfs: don't account buffer cancellation during log recovery readahead
  xfs: check for underflow in xfs_iformat_fork()
  xfs: xfs_dir3_sfe_put_ino can be static
  xfs: introduce object readahead to log recovery
  xfs: Simplify xfs_ail_min() with list_first_entry_or_null()
  xfs: Register hotcpu notifier after initialization
  xfs: add xfs sb v4 support for dirent filetype field
  xfs: Add write support for dirent filetype field
  xfs: Add read-only support for dirent filetype field
  ...
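
The three "dirent filetype field" commits listed above store each entry's file type in the directory block itself, so readdir()/getdents() can report d_type without having to read the inode. The following is a minimal userspace sketch of the consumer side; it is illustrative only and not part of this merge, and it assumes an XFS filesystem with the new feature enabled (on an old-format directory d_type simply comes back DT_UNKNOWN):

#include <dirent.h>
#include <stdio.h>

/*
 * Print each entry's name and the type readdir() reports via d_type.
 * Without a per-entry file type on disk, d_type is DT_UNKNOWN and the
 * caller must fall back to stat(); the dirent filetype feature lets XFS
 * fill this in straight from the directory entry.
 */
int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : ".";
	DIR *dir = opendir(path);
	struct dirent *de;

	if (!dir) {
		perror("opendir");
		return 1;
	}
	while ((de = readdir(dir)) != NULL) {
		const char *type;

		switch (de->d_type) {
		case DT_REG:     type = "regular"; break;
		case DT_DIR:     type = "directory"; break;
		case DT_LNK:     type = "symlink"; break;
		case DT_UNKNOWN: type = "unknown (stat needed)"; break;
		default:         type = "other"; break;
		}
		printf("%-24s %s\n", type, de->d_name);
	}
	closedir(dir);
	return 0;
}

The same program still works on a directory without the feature; callers just see DT_UNKNOWN and have to stat() each entry, which is exactly the extra I/O the new on-disk field avoids.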
master · Linus Torvalds · 9 years ago · commit 300893b08f
arch/powerpc/platforms/cell/spufs/inode.c | 8
fs/quota/quota.c | 29
fs/xfs/Makefile | 20
fs/xfs/xfs_acl.c | 24
fs/xfs/xfs_ag.h | 53
fs/xfs/xfs_alloc.c | 6
fs/xfs/xfs_aops.c | 23
fs/xfs/xfs_attr.c | 427
fs/xfs/xfs_attr.h | 9
fs/xfs/xfs_attr_inactive.c | 453
fs/xfs/xfs_attr_leaf.c | 657
fs/xfs/xfs_attr_leaf.h | 2
fs/xfs/xfs_attr_list.c | 655
fs/xfs/xfs_attr_remote.c | 18
fs/xfs/xfs_bmap.c | 823
fs/xfs/xfs_bmap.h | 56
fs/xfs/xfs_bmap_btree.c | 6
fs/xfs/xfs_bmap_util.c | 2026
fs/xfs/xfs_bmap_util.h | 110
fs/xfs/xfs_btree.c | 7
fs/xfs/xfs_btree.h | 2
fs/xfs/xfs_buf.c | 5
fs/xfs/xfs_buf_item.c | 58
fs/xfs/xfs_buf_item.h | 100
fs/xfs/xfs_da_btree.c | 8
fs/xfs/xfs_da_btree.h | 12
fs/xfs/xfs_dfrag.c | 459
fs/xfs/xfs_dfrag.h | 53
fs/xfs/xfs_dir2.c | 58
fs/xfs/xfs_dir2.h | 46
fs/xfs/xfs_dir2_block.c | 122
fs/xfs/xfs_dir2_data.c | 25
fs/xfs/xfs_dir2_format.h | 186
fs/xfs/xfs_dir2_leaf.c | 404
fs/xfs/xfs_dir2_node.c | 14
fs/xfs/xfs_dir2_priv.h | 49
fs/xfs/xfs_dir2_readdir.c | 695
fs/xfs/xfs_dir2_sf.c | 240
fs/xfs/xfs_discard.c | 5
fs/xfs/xfs_dquot.c | 8
fs/xfs/xfs_dquot_item.c | 23
fs/xfs/xfs_error.c | 1
fs/xfs/xfs_export.c | 5
fs/xfs/xfs_extent_busy.c | 2
fs/xfs/xfs_extfree_item.c | 50
fs/xfs/xfs_extfree_item.h | 88
fs/xfs/xfs_file.c | 3
fs/xfs/xfs_filestream.c | 8
fs/xfs/xfs_filestream.h | 4
fs/xfs/xfs_format.h | 169
fs/xfs/xfs_fs.h | 38
fs/xfs/xfs_fsops.c | 8
fs/xfs/xfs_ialloc.c | 7
fs/xfs/xfs_icache.c | 15
fs/xfs/xfs_icache.h | 50
fs/xfs/xfs_icreate_item.c | 21
fs/xfs/xfs_icreate_item.h | 18
fs/xfs/xfs_inode.c | 3833
fs/xfs/xfs_inode.h | 312
fs/xfs/xfs_inode_buf.c | 483
fs/xfs/xfs_inode_buf.h | 53
fs/xfs/xfs_inode_fork.c | 1920
fs/xfs/xfs_inode_fork.h | 171
fs/xfs/xfs_inode_item.c | 53
fs/xfs/xfs_inode_item.h | 115
fs/xfs/xfs_ioctl.c | 148
fs/xfs/xfs_ioctl.h | 10
fs/xfs/xfs_ioctl32.c | 4
fs/xfs/xfs_iomap.c | 21
fs/xfs/xfs_iops.c | 78
fs/xfs/xfs_iops.h | 13
fs/xfs/xfs_linux.h | 60
fs/xfs/xfs_log.c | 113
fs/xfs/xfs_log.h | 90
fs/xfs/xfs_log_cil.c | 371
fs/xfs/xfs_log_format.h | 852
fs/xfs/xfs_log_priv.h | 155
fs/xfs/xfs_log_recover.c | 407
fs/xfs/xfs_log_rlimit.c | 147
fs/xfs/xfs_mount.c | 755
fs/xfs/xfs_mount.h | 113
fs/xfs/xfs_qm.c | 95
fs/xfs/xfs_qm.h | 2
fs/xfs/xfs_qm_bhv.c | 1
fs/xfs/xfs_qm_syscalls.c | 126
fs/xfs/xfs_quota.h | 278
fs/xfs/xfs_quota_defs.h | 157
fs/xfs/xfs_quotaops.c | 17
fs/xfs/xfs_rename.c | 346
fs/xfs/xfs_rtalloc.c | 28
fs/xfs/xfs_rtalloc.h | 53
fs/xfs/xfs_sb.c | 834
fs/xfs/xfs_sb.h | 72
fs/xfs/xfs_super.c | 31
fs/xfs/xfs_symlink.c | 196
fs/xfs/xfs_symlink.h | 41
fs/xfs/xfs_symlink_remote.c | 200
fs/xfs/xfs_trace.c | 1
fs/xfs/xfs_trans.c | 732
fs/xfs/xfs_trans.h | 301
Some files were not shown because too many files have changed in this diff.

arch/powerpc/platforms/cell/spufs/inode.c | 8

@@ -620,12 +620,16 @@ spufs_parse_options(struct super_block *sb, char *options, struct inode *root)
case Opt_uid:
if (match_int(&args[0], &option))
return 0;
root->i_uid = option;
root->i_uid = make_kuid(current_user_ns(), option);
if (!uid_valid(root->i_uid))
return 0;
break;
case Opt_gid:
if (match_int(&args[0], &option))
return 0;
root->i_gid = option;
root->i_gid = make_kgid(current_user_ns(), option);
if (!gid_valid(root->i_gid))
return 0;
break;
case Opt_mode:
if (match_octal(&args[0], &option))

fs/quota/quota.c | 29

@@ -27,6 +27,7 @@ static int check_quotactl_permission(struct super_block *sb, int type, int cmd,
case Q_SYNC:
case Q_GETINFO:
case Q_XGETQSTAT:
case Q_XGETQSTATV:
case Q_XQUOTASYNC:
break;
/* allow to query information for dquots we "own" */
@@ -217,6 +218,31 @@ static int quota_getxstate(struct super_block *sb, void __user *addr)
return ret;
}
static int quota_getxstatev(struct super_block *sb, void __user *addr)
{
struct fs_quota_statv fqs;
int ret;
if (!sb->s_qcop->get_xstatev)
return -ENOSYS;
memset(&fqs, 0, sizeof(fqs));
if (copy_from_user(&fqs, addr, 1)) /* Just read qs_version */
return -EFAULT;
/* If this kernel doesn't support user specified version, fail */
switch (fqs.qs_version) {
case FS_QSTATV_VERSION1:
break;
default:
return -EINVAL;
}
ret = sb->s_qcop->get_xstatev(sb, &fqs);
if (!ret && copy_to_user(addr, &fqs, sizeof(fqs)))
return -EFAULT;
return ret;
}
static int quota_setxquota(struct super_block *sb, int type, qid_t id,
void __user *addr)
{
@@ -293,6 +319,8 @@ static int do_quotactl(struct super_block *sb, int type, int cmd, qid_t id,
return quota_setxstate(sb, cmd, addr);
case Q_XGETQSTAT:
return quota_getxstate(sb, addr);
case Q_XGETQSTATV:
return quota_getxstatev(sb, addr);
case Q_XSETQLIM:
return quota_setxquota(sb, type, id, addr);
case Q_XGETQUOTA:
@@ -317,6 +345,7 @@ static int quotactl_cmd_write(int cmd)
case Q_GETINFO:
case Q_SYNC:
case Q_XGETQSTAT:
case Q_XGETQSTATV:
case Q_XGETQUOTA:
case Q_XQUOTASYNC:
return 0;
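
For reference, the new Q_XGETQSTATV path added above is exercised from userspace roughly as follows. This is a hedged sketch, not code from this merge: it assumes a kernel carrying this series, uapi headers that already export Q_XGETQSTATV, FS_QSTATV_VERSION1 and struct fs_quota_statv (linux/dqblk_xfs.h), and /dev/sdb1 is only a placeholder for an XFS block device mounted with quota accounting enabled:

#include <stdio.h>
#include <string.h>
#include <sys/quota.h>          /* quotactl(), QCMD(), USRQUOTA */
#include <linux/dqblk_xfs.h>    /* Q_XGETQSTATV, struct fs_quota_statv */

int main(void)
{
	struct fs_quota_statv sv;
	/* hypothetical block device backing an XFS mount with quota on */
	const char *dev = "/dev/sdb1";

	memset(&sv, 0, sizeof(sv));
	/*
	 * The caller states which structure version it understands; the
	 * kernel side (quota_getxstatev() above) rejects anything else.
	 */
	sv.qs_version = FS_QSTATV_VERSION1;

	if (quotactl(QCMD(Q_XGETQSTATV, USRQUOTA), dev, 0, (void *)&sv) < 0) {
		perror("quotactl(Q_XGETQSTATV)");
		return 1;
	}
	printf("quota state version %d, flags 0x%x\n",
	       sv.qs_version, (unsigned)sv.qs_flags);
	return 0;
}

The qs_version handshake is the point of the new call: unlike Q_XGETQSTAT, the structure can grow in later versions, and the kernel refuses a version it does not recognise instead of silently returning a truncated or mismatched layout.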

fs/xfs/Makefile | 20

@@ -27,9 +27,12 @@ xfs-y += xfs_trace.o
# highlevel code
xfs-y += xfs_aops.o \
xfs_attr_inactive.o \
xfs_attr_list.o \
xfs_bit.o \
xfs_bmap_util.o \
xfs_buf.o \
xfs_dfrag.o \
xfs_dir2_readdir.o \
xfs_discard.o \
xfs_error.o \
xfs_export.o \
@@ -44,11 +47,11 @@ xfs-y += xfs_aops.o \
xfs_iops.o \
xfs_itable.o \
xfs_message.o \
xfs_mount.o \
xfs_mru_cache.o \
xfs_rename.o \
xfs_super.o \
xfs_utils.o \
xfs_vnodeops.o \
xfs_symlink.o \
xfs_trans.o \
xfs_xattr.o \
kmem.o \
uuid.o
@@ -73,10 +76,13 @@ xfs-y += xfs_alloc.o \
xfs_ialloc_btree.o \
xfs_icreate_item.o \
xfs_inode.o \
xfs_inode_fork.o \
xfs_inode_buf.o \
xfs_log_recover.o \
xfs_mount.o \
xfs_symlink.o \
xfs_trans.o
xfs_log_rlimit.o \
xfs_sb.o \
xfs_symlink_remote.o \
xfs_trans_resv.o
# low-level transaction/log code
xfs-y += xfs_log.o \

fs/xfs/xfs_acl.c | 24

@@ -16,11 +16,13 @@
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "xfs.h"
#include "xfs_log_format.h"
#include "xfs_trans_resv.h"
#include "xfs_acl.h"
#include "xfs_attr.h"
#include "xfs_bmap_btree.h"
#include "xfs_inode.h"
#include "xfs_vnodeops.h"
#include "xfs_ag.h"
#include "xfs_sb.h"
#include "xfs_mount.h"
#include "xfs_trace.h"
@@ -68,14 +70,15 @@ xfs_acl_from_disk(
switch (acl_e->e_tag) {
case ACL_USER:
acl_e->e_uid = xfs_uid_to_kuid(be32_to_cpu(ace->ae_id));
break;
case ACL_GROUP:
acl_e->e_id = be32_to_cpu(ace->ae_id);
acl_e->e_gid = xfs_gid_to_kgid(be32_to_cpu(ace->ae_id));
break;
case ACL_USER_OBJ:
case ACL_GROUP_OBJ:
case ACL_MASK:
case ACL_OTHER:
acl_e->e_id = ACL_UNDEFINED_ID;
break;
default:
goto fail;
@@ -101,7 +104,18 @@ xfs_acl_to_disk(struct xfs_acl *aclp, const struct posix_acl *acl)
acl_e = &acl->a_entries[i];
ace->ae_tag = cpu_to_be32(acl_e->e_tag);
ace->ae_id = cpu_to_be32(acl_e->e_id);
switch (acl_e->e_tag) {
case ACL_USER:
ace->ae_id = cpu_to_be32(xfs_kuid_to_uid(acl_e->e_uid));
break;
case ACL_GROUP:
ace->ae_id = cpu_to_be32(xfs_kgid_to_gid(acl_e->e_gid));
break;
default:
ace->ae_id = cpu_to_be32(ACL_UNDEFINED_ID);
break;
}
ace->ae_perm = cpu_to_be16(acl_e->e_perm);
}
}
@@ -360,7 +374,7 @@ xfs_xattr_acl_set(struct dentry *dentry, const char *name,
return -EINVAL;
if (type == ACL_TYPE_DEFAULT && !S_ISDIR(inode->i_mode))
return value ? -EACCES : 0;
if ((current_fsuid() != inode->i_uid) && !capable(CAP_FOWNER))
if (!inode_owner_or_capable(inode))
return -EPERM;
if (!value)

fs/xfs/xfs_ag.h | 53

@@ -226,59 +226,6 @@ typedef struct xfs_agfl {
__be32 agfl_bno[]; /* actually XFS_AGFL_SIZE(mp) */
} xfs_agfl_t;
/*
* Per-ag incore structure, copies of information in agf and agi,
* to improve the performance of allocation group selection.
*/
#define XFS_PAGB_NUM_SLOTS 128
typedef struct xfs_perag {
struct xfs_mount *pag_mount; /* owner filesystem */
xfs_agnumber_t pag_agno; /* AG this structure belongs to */
atomic_t pag_ref; /* perag reference count */
char pagf_init; /* this agf's entry is initialized */
char pagi_init; /* this agi's entry is initialized */
char pagf_metadata; /* the agf is preferred to be metadata */
char pagi_inodeok; /* The agi is ok for inodes */
__uint8_t pagf_levels[XFS_BTNUM_AGF];
/* # of levels in bno & cnt btree */
__uint32_t pagf_flcount; /* count of blocks in freelist */
xfs_extlen_t pagf_freeblks; /* total free blocks */
xfs_extlen_t pagf_longest; /* longest free space */
__uint32_t pagf_btreeblks; /* # of blocks held in AGF btrees */
xfs_agino_t pagi_freecount; /* number of free inodes */
xfs_agino_t pagi_count; /* number of allocated inodes */
/*
* Inode allocation search lookup optimisation.
* If the pagino matches, the search for new inodes
* doesn't need to search the near ones again straight away
*/
xfs_agino_t pagl_pagino;
xfs_agino_t pagl_leftrec;
xfs_agino_t pagl_rightrec;
#ifdef __KERNEL__
spinlock_t pagb_lock; /* lock for pagb_tree */
struct rb_root pagb_tree; /* ordered tree of busy extents */
atomic_t pagf_fstrms; /* # of filestreams active in this AG */
spinlock_t pag_ici_lock; /* incore inode cache lock */
struct radix_tree_root pag_ici_root; /* incore inode cache root */
int pag_ici_reclaimable; /* reclaimable inodes */
struct mutex pag_ici_reclaim_lock; /* serialisation point */
unsigned long pag_ici_reclaim_cursor; /* reclaim restart point */
/* buffer cache index */
spinlock_t pag_buf_lock; /* lock for pag_buf_tree */
struct rb_root pag_buf_tree; /* ordered tree of active buffers */
/* for rcu-safe freeing */
struct rcu_head rcu_head;
#endif
int pagb_count; /* pagb slots in use */
} xfs_perag_t;
/*
* tags for inode radix tree
*/

fs/xfs/xfs_alloc.c | 6

@@ -878,7 +878,7 @@ xfs_alloc_ag_vextent_near(
xfs_agblock_t ltnew; /* useful start bno of left side */
xfs_extlen_t rlen; /* length of returned extent */
int forced = 0;
#if defined(DEBUG) && defined(__KERNEL__)
#ifdef DEBUG
/*
* Randomly don't execute the first algorithm.
*/
@@ -938,8 +938,8 @@ restart:
xfs_extlen_t blen=0;
xfs_agblock_t bnew=0;
#if defined(DEBUG) && defined(__KERNEL__)
if (!dofirst)
#ifdef DEBUG
if (dofirst)
break;
#endif
/*

fs/xfs/xfs_aops.c | 23

@@ -28,9 +28,9 @@
#include "xfs_alloc.h"
#include "xfs_error.h"
#include "xfs_iomap.h"
#include "xfs_vnodeops.h"
#include "xfs_trace.h"
#include "xfs_bmap.h"
#include "xfs_bmap_util.h"
#include <linux/aio.h>
#include <linux/gfp.h>
#include <linux/mpage.h>
@@ -108,7 +108,7 @@ xfs_setfilesize_trans_alloc(
tp = xfs_trans_alloc(mp, XFS_TRANS_FSYNC_TS);
error = xfs_trans_reserve(tp, 0, XFS_FSYNC_TS_LOG_RES(mp), 0, 0, 0);
error = xfs_trans_reserve(tp, &M_RES(mp)->tr_fsyncts, 0, 0);
if (error) {
xfs_trans_cancel(tp, 0);
return error;
@@ -440,7 +440,7 @@ xfs_start_page_writeback(
end_page_writeback(page);
}
static inline int bio_add_buffer(struct bio *bio, struct buffer_head *bh)
static inline int xfs_bio_add_buffer(struct bio *bio, struct buffer_head *bh)
{
return bio_add_page(bio, bh->b_page, bh->b_size, bh_offset(bh));
}
@@ -514,7 +514,7 @@ xfs_submit_ioend(
goto retry;
}
if (bio_add_buffer(bio, bh) != bh->b_size) {
if (xfs_bio_add_buffer(bio, bh) != bh->b_size) {
xfs_submit_ioend_bio(wbc, ioend, bio);
goto retry;
}
@@ -1498,13 +1498,26 @@ xfs_vm_write_failed(
loff_t pos,
unsigned len)
{
loff_t block_offset = pos & PAGE_MASK;
loff_t block_offset;
loff_t block_start;
loff_t block_end;
loff_t from = pos & (PAGE_CACHE_SIZE - 1);
loff_t to = from + len;
struct buffer_head *bh, *head;
/*
* The request pos offset might be 32 or 64 bit, this is all fine
* on 64-bit platform. However, for 64-bit pos request on 32-bit
* platform, the high 32-bit will be masked off if we evaluate the
* block_offset via (pos & PAGE_MASK) because the PAGE_MASK is
* 0xfffff000 as an unsigned long, hence the result is incorrect
* which could cause the following ASSERT failed in most cases.
* In order to avoid this, we can evaluate the block_offset of the
* start of the page by using shifts rather than masks the mismatch
* problem.
*/
block_offset = (pos >> PAGE_CACHE_SHIFT) << PAGE_CACHE_SHIFT;
ASSERT(block_offset + from == pos);
head = page_buffers(page);
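
The comment added above boils down to an integer-width trap: on a 32-bit build PAGE_MASK is a 32-bit unsigned long, so pos & PAGE_MASK zero-extends the mask and silently discards the upper half of a 64-bit offset. A standalone sketch of the same arithmetic follows; the 0xfffff000 mask and the shift of 12 mirror the 4k page size mentioned in the comment and are illustrative values, not taken from kernel headers:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* A file offset beyond 4 GiB, as a 64-bit quantity (loff_t-like). */
	uint64_t pos = 0x100001234ULL;

	/*
	 * What a 32-bit kernel sees: PAGE_MASK is an unsigned long, i.e.
	 * 32 bits wide, so the AND clears bits 32..63 of pos.
	 */
	uint32_t page_mask_32 = 0xfffff000u;
	uint64_t masked  = pos & page_mask_32;	/* high bits lost */

	/*
	 * The fix in xfs_vm_write_failed(): shift down and back up,
	 * which preserves the full 64-bit value.
	 */
	uint64_t shifted = (pos >> 12) << 12;

	printf("pos        = 0x%llx\n", (unsigned long long)pos);
	printf("pos & mask = 0x%llx  (wrong page start)\n",
	       (unsigned long long)masked);
	printf("shifted    = 0x%llx  (correct page start)\n",
	       (unsigned long long)shifted);
	return 0;
}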

fs/xfs/xfs_attr.c | 427

@@ -17,10 +17,11 @@
*/
#include "xfs.h"
#include "xfs_fs.h"
#include "xfs_types.h"
#include "xfs_format.h"
#include "xfs_bit.h"
#include "xfs_log.h"
#include "xfs_trans.h"
#include "xfs_trans_priv.h"
#include "xfs_sb.h"
#include "xfs_ag.h"
#include "xfs_mount.h"
@@ -32,13 +33,13 @@
#include "xfs_alloc.h"
#include "xfs_inode_item.h"
#include "xfs_bmap.h"
#include "xfs_bmap_util.h"
#include "xfs_attr.h"
#include "xfs_attr_leaf.h"
#include "xfs_attr_remote.h"
#include "xfs_error.h"
#include "xfs_quota.h"
#include "xfs_trans_space.h"
#include "xfs_vnodeops.h"
#include "xfs_trace.h"
/*
@@ -62,7 +63,6 @@ STATIC int xfs_attr_shortform_addname(xfs_da_args_t *args);
STATIC int xfs_attr_leaf_get(xfs_da_args_t *args);
STATIC int xfs_attr_leaf_addname(xfs_da_args_t *args);
STATIC int xfs_attr_leaf_removename(xfs_da_args_t *args);
STATIC int xfs_attr_leaf_list(xfs_attr_list_context_t *context);
/*
* Internal routines when attribute list is more than one block.
@@ -70,7 +70,6 @@ STATIC int xfs_attr_leaf_list(xfs_attr_list_context_t *context);
STATIC int xfs_attr_node_get(xfs_da_args_t *args);
STATIC int xfs_attr_node_addname(xfs_da_args_t *args);
STATIC int xfs_attr_node_removename(xfs_da_args_t *args);
STATIC int xfs_attr_node_list(xfs_attr_list_context_t *context);
STATIC int xfs_attr_fillstate(xfs_da_state_t *state);
STATIC int xfs_attr_refillstate(xfs_da_state_t *state);
@@ -90,7 +89,7 @@ xfs_attr_name_to_xname(
return 0;
}
STATIC int
int
xfs_inode_hasattr(
struct xfs_inode *ip)
{
@@ -227,13 +226,14 @@ xfs_attr_set_int(
int valuelen,
int flags)
{
xfs_da_args_t args;
xfs_fsblock_t firstblock;
xfs_bmap_free_t flist;
int error, err2, committed;
xfs_mount_t *mp = dp->i_mount;
int rsvd = (flags & ATTR_ROOT) != 0;
int local;
xfs_da_args_t args;
xfs_fsblock_t firstblock;
xfs_bmap_free_t flist;
int error, err2, committed;
struct xfs_mount *mp = dp->i_mount;
struct xfs_trans_res tres;
int rsvd = (flags & ATTR_ROOT) != 0;
int local;
/*
* Attach the dquots to the inode.
@@ -293,11 +293,11 @@ xfs_attr_set_int(
if (rsvd)
args.trans->t_flags |= XFS_TRANS_RESERVE;
error = xfs_trans_reserve(args.trans, args.total,
XFS_ATTRSETM_LOG_RES(mp) +
XFS_ATTRSETRT_LOG_RES(mp) * args.total,
0, XFS_TRANS_PERM_LOG_RES,
XFS_ATTRSET_LOG_COUNT);
tres.tr_logres = M_RES(mp)->tr_attrsetm.tr_logres +
M_RES(mp)->tr_attrsetrt.tr_logres * args.total;
tres.tr_logcount = XFS_ATTRSET_LOG_COUNT;
tres.tr_logflags = XFS_TRANS_PERM_LOG_RES;
error = xfs_trans_reserve(args.trans, &tres, args.total, 0);
if (error) {
xfs_trans_cancel(args.trans, 0);
return(error);
@@ -517,11 +517,9 @@ xfs_attr_remove_int(xfs_inode_t *dp, struct xfs_name *name, int flags)
if (flags & ATTR_ROOT)
args.trans->t_flags |= XFS_TRANS_RESERVE;
if ((error = xfs_trans_reserve(args.trans,
XFS_ATTRRM_SPACE_RES(mp),
XFS_ATTRRM_LOG_RES(mp),
0, XFS_TRANS_PERM_LOG_RES,
XFS_ATTRRM_LOG_COUNT))) {
error = xfs_trans_reserve(args.trans, &M_RES(mp)->tr_attrrm,
XFS_ATTRRM_SPACE_RES(mp), 0);
if (error) {
xfs_trans_cancel(args.trans, 0);
return(error);
}
@@ -611,228 +609,6 @@ xfs_attr_remove(
return xfs_attr_remove_int(dp, &xname, flags);
}
int
xfs_attr_list_int(xfs_attr_list_context_t *context)
{
int error;
xfs_inode_t *dp = context->dp;
XFS_STATS_INC(xs_attr_list);
if (XFS_FORCED_SHUTDOWN(dp->i_mount))
return EIO;
xfs_ilock(dp, XFS_ILOCK_SHARED);
/*
* Decide on what work routines to call based on the inode size.
*/
if (!xfs_inode_hasattr(dp)) {
error = 0;
} else if (dp->i_d.di_aformat == XFS_DINODE_FMT_LOCAL) {
error = xfs_attr_shortform_list(context);
} else if (xfs_bmap_one_block(dp, XFS_ATTR_FORK)) {
error = xfs_attr_leaf_list(context);
} else {
error = xfs_attr_node_list(context);
}
xfs_iunlock(dp, XFS_ILOCK_SHARED);
return error;
}
#define ATTR_ENTBASESIZE /* minimum bytes used by an attr */ \
(((struct attrlist_ent *) 0)->a_name - (char *) 0)
#define ATTR_ENTSIZE(namelen) /* actual bytes used by an attr */ \
((ATTR_ENTBASESIZE + (namelen) + 1 + sizeof(u_int32_t)-1) \
& ~(sizeof(u_int32_t)-1))
/*
* Format an attribute and copy it out to the user's buffer.
* Take care to check values and protect against them changing later,
* we may be reading them directly out of a user buffer.
*/
/*ARGSUSED*/
STATIC int
xfs_attr_put_listent(
xfs_attr_list_context_t *context,
int flags,
unsigned char *name,
int namelen,
int valuelen,
unsigned char *value)
{
struct attrlist *alist = (struct attrlist *)context->alist;
attrlist_ent_t *aep;
int arraytop;
ASSERT(!(context->flags & ATTR_KERNOVAL));
ASSERT(context->count >= 0);
ASSERT(context->count < (ATTR_MAX_VALUELEN/8));
ASSERT(context->firstu >= sizeof(*alist));
ASSERT(context->firstu <= context->bufsize);
/*
* Only list entries in the right namespace.
*/
if (((context->flags & ATTR_SECURE) == 0) !=
((flags & XFS_ATTR_SECURE) == 0))
return 0;
if (((context->flags & ATTR_ROOT) == 0) !=
((flags & XFS_ATTR_ROOT) == 0))
return 0;
arraytop = sizeof(*alist) +
context->count * sizeof(alist->al_offset[0]);
context->firstu -= ATTR_ENTSIZE(namelen);
if (context->firstu < arraytop) {
trace_xfs_attr_list_full(context);
alist->al_more = 1;
context->seen_enough = 1;
return 1;
}
aep = (attrlist_ent_t *)&context->alist[context->firstu];
aep->a_valuelen = valuelen;
memcpy(aep->a_name, name, namelen);
aep->a_name[namelen] = 0;
alist->al_offset[context->count++] = context->firstu;
alist->al_count = context->count;
trace_xfs_attr_list_add(context);
return 0;
}
/*
* Generate a list of extended attribute names and optionally
* also value lengths. Positive return value follows the XFS
* convention of being an error, zero or negative return code
* is the length of the buffer returned (negated), indicating
* success.
*/
int
xfs_attr_list(
xfs_inode_t *dp,
char *buffer,
int bufsize,
int flags,
attrlist_cursor_kern_t *cursor)
{
xfs_attr_list_context_t context;
struct attrlist *alist;
int error;
/*
* Validate the cursor.
*/
if (cursor->pad1 || cursor->pad2)
return(XFS_ERROR(EINVAL));
if ((cursor->initted == 0) &&
(cursor->hashval || cursor->blkno || cursor->offset))
return XFS_ERROR(EINVAL);
/*
* Check for a properly aligned buffer.
*/
if (((long)buffer) & (sizeof(int)-1))
return XFS_ERROR(EFAULT);
if (flags & ATTR_KERNOVAL)
bufsize = 0;
/*
* Initialize the output buffer.
*/
memset(&context, 0, sizeof(context));
context.dp = dp;
context.cursor = cursor;
context.resynch = 1;
context.flags = flags;
context.alist = buffer;
context.bufsize = (bufsize & ~(sizeof(int)-1)); /* align */
context.firstu = context.bufsize;
context.put_listent = xfs_attr_put_listent;
alist = (struct attrlist *)context.alist;
alist->al_count = 0;
alist->al_more = 0;
alist->al_offset[0] = context.bufsize;
error = xfs_attr_list_int(&context);
ASSERT(error >= 0);
return error;
}
int /* error */
xfs_attr_inactive(xfs_inode_t *dp)
{
xfs_trans_t *trans;
xfs_mount_t *mp;
int error;
mp = dp->i_mount;
ASSERT(! XFS_NOT_DQATTACHED(mp, dp));
xfs_ilock(dp, XFS_ILOCK_SHARED);
if (!xfs_inode_hasattr(dp) ||
dp->i_d.di_aformat == XFS_DINODE_FMT_LOCAL) {
xfs_iunlock(dp, XFS_ILOCK_SHARED);
return 0;
}
xfs_iunlock(dp, XFS_ILOCK_SHARED);
/*
* Start our first transaction of the day.
*
* All future transactions during this code must be "chained" off
* this one via the trans_dup() call. All transactions will contain
* the inode, and the inode will always be marked with trans_ihold().
* Since the inode will be locked in all transactions, we must log
* the inode in every transaction to let it float upward through
* the log.
*/
trans = xfs_trans_alloc(mp, XFS_TRANS_ATTRINVAL);
if ((error = xfs_trans_reserve(trans, 0, XFS_ATTRINVAL_LOG_RES(mp), 0,
XFS_TRANS_PERM_LOG_RES,
XFS_ATTRINVAL_LOG_COUNT))) {
xfs_trans_cancel(trans, 0);
return(error);
}
xfs_ilock(dp, XFS_ILOCK_EXCL);
/*
* No need to make quota reservations here. We expect to release some
* blocks, not allocate, in the common case.
*/
xfs_trans_ijoin(trans, dp, 0);
/*
* Decide on what work routines to call based on the inode size.
*/
if (!xfs_inode_hasattr(dp) ||
dp->i_d.di_aformat == XFS_DINODE_FMT_LOCAL) {
error = 0;
goto out;
}
error = xfs_attr3_root_inactive(&trans, dp);
if (error)
goto out;
error = xfs_itruncate_extents(&trans, dp, XFS_ATTR_FORK, 0);
if (error)
goto out;
error = xfs_trans_commit(trans, XFS_TRANS_RELEASE_LOG_RES);
xfs_iunlock(dp, XFS_ILOCK_EXCL);
return(error);
out:
xfs_trans_cancel(trans, XFS_TRANS_RELEASE_LOG_RES|XFS_TRANS_ABORT);
xfs_iunlock(dp, XFS_ILOCK_EXCL);
return(error);
}
/*========================================================================
* External routines when attribute list is inside the inode
@@ -1166,28 +942,6 @@ xfs_attr_leaf_get(xfs_da_args_t *args)
return error;
}
/*
* Copy out attribute entries for attr_list(), for leaf attribute lists.
*/
STATIC int
xfs_attr_leaf_list(xfs_attr_list_context_t *context)
{
int error;
struct xfs_buf *bp;
trace_xfs_attr_leaf_list(context);
context->cursor->blkno = 0;
error = xfs_attr3_leaf_read(NULL, context->dp, 0, -1, &bp);
if (error)
return XFS_ERROR(error);
error = xfs_attr3_leaf_list_int(bp, context);
xfs_trans_brelse(NULL, bp);
return XFS_ERROR(error);
}
/*========================================================================
* External routines when attribute list size > XFS_LBSIZE(mp).
*========================================================================*/
@@ -1260,6 +1014,7 @@ restart:
* have been a b-tree.
*/
xfs_da_state_free(state);
state = NULL;
xfs_bmap_init(args->flist, args->firstblock);
error = xfs_attr3_leaf_to_node(args);
if (!error) {
@@ -1780,143 +1535,3 @@ xfs_attr_node_get(xfs_da_args_t *args)
xfs_da_state_free(state);
return(retval);
}
STATIC int /* error */
xfs_attr_node_list(xfs_attr_list_context_t *context)
{
attrlist_cursor_kern_t *cursor;
xfs_attr_leafblock_t *leaf;
xfs_da_intnode_t *node;
struct xfs_attr3_icleaf_hdr leafhdr;
struct xfs_da3_icnode_hdr nodehdr;
struct xfs_da_node_entry *btree;
int error, i;
struct xfs_buf *bp;
trace_xfs_attr_node_list(context);
cursor = context->cursor;
cursor->initted = 1;
/*
* Do all sorts of validation on the passed-in cursor structure.
* If anything is amiss, ignore the cursor and look up the hashval
* starting from the btree root.
*/
bp = NULL;
if (cursor->blkno > 0) {
error = xfs_da3_node_read(NULL, context->dp, cursor->blkno, -1,
&bp, XFS_ATTR_FORK);
if ((error != 0) && (error != EFSCORRUPTED))
return(error);
if (bp) {
struct xfs_attr_leaf_entry *entries;
node = bp->b_addr;
switch (be16_to_cpu(node->hdr.info.magic)) {
case XFS_DA_NODE_MAGIC:
case XFS_DA3_NODE_MAGIC:
trace_xfs_attr_list_wrong_blk(context);
xfs_trans_brelse(NULL, bp);
bp = NULL;
break;
case XFS_ATTR_LEAF_MAGIC:
case XFS_ATTR3_LEAF_MAGIC:
leaf = bp->b_addr;
xfs_attr3_leaf_hdr_from_disk(&leafhdr, leaf);
entries = xfs_attr3_leaf_entryp(leaf);
if (cursor->hashval > be32_to_cpu(
entries[leafhdr.count - 1].hashval)) {
trace_xfs_attr_list_wrong_blk(context);
xfs_trans_brelse(NULL, bp);
bp = NULL;
} else if (cursor->hashval <= be32_to_cpu(
entries[0].hashval)) {
trace_xfs_attr_list_wrong_blk(context);
xfs_trans_brelse(NULL, bp);
bp = NULL;
}
break;
default:
trace_xfs_attr_list_wrong_blk(context);
xfs_trans_brelse(NULL, bp);
bp = NULL;
}
}
}
/*
* We did not find what we expected given the cursor's contents,
* so we start from the top and work down based on the hash value.
* Note that start of node block is same as start of leaf block.
*/
if (bp == NULL) {
cursor->blkno = 0;
for (;;) {
__uint16_t magic;
error = xfs_da3_node_read(NULL, context->dp,
cursor->blkno, -1, &bp,
XFS_ATTR_FORK);
if (error)
return(error);
node = bp->b_addr;
magic = be16_to_cpu(node->hdr.info.magic);
if (magic == XFS_ATTR_LEAF_MAGIC ||
magic == XFS_ATTR3_LEAF_MAGIC)
break;
if (magic != XFS_DA_NODE_MAGIC &&
magic != XFS_DA3_NODE_MAGIC) {
XFS_CORRUPTION_ERROR("xfs_attr_node_list(3)",
XFS_ERRLEVEL_LOW,
context->dp->i_mount,
node);
xfs_trans_brelse(NULL, bp);
return XFS_ERROR(EFSCORRUPTED);
}
xfs_da3_node_hdr_from_disk(&nodehdr, node);
btree = xfs_da3_node_tree_p(node);
for (i = 0; i < nodehdr.count; btree++, i++) {
if (cursor->hashval
<= be32_to_cpu(btree->hashval)) {
cursor->blkno = be32_to_cpu(btree->before);
trace_xfs_attr_list_node_descend(context,
btree);
break;
}
}
if (i == nodehdr.count) {
xfs_trans_brelse(NULL, bp);
return 0;
}
xfs_trans_brelse(NULL, bp);
}
}
ASSERT(bp != NULL);
/*
* Roll upward through the blocks, processing each leaf block in
* order. As long as there is space in the result buffer, keep
* adding the information.
*/
for (;;) {
leaf = bp->b_addr;
error = xfs_attr3_leaf_list_int(bp, context);
if (error) {
xfs_trans_brelse(NULL, bp);
return error;
}
xfs_attr3_leaf_hdr_from_disk(&leafhdr, leaf);
if (context->seen_enough || leafhdr.forw == 0)
break;
cursor->blkno = leafhdr.forw;
xfs_trans_brelse(NULL, bp);
error = xfs_attr3_leaf_read(NULL, context->dp, cursor->blkno, -1,
&bp);
if (error)
return error;
}
xfs_trans_brelse(NULL, bp);
return 0;
}

fs/xfs/xfs_attr.h | 9

@@ -141,5 +141,14 @@ typedef struct xfs_attr_list_context {
*/
int xfs_attr_inactive(struct xfs_inode *dp);
int xfs_attr_list_int(struct xfs_attr_list_context *);
int xfs_inode_hasattr(struct xfs_inode *ip);
int xfs_attr_get(struct xfs_inode *ip, const unsigned char *name,
unsigned char *value, int *valuelenp, int flags);
int xfs_attr_set(struct xfs_inode *dp, const unsigned char *name,
unsigned char *value, int valuelen, int flags);
int xfs_attr_remove(struct xfs_inode *dp, const unsigned char *name, int flags);
int xfs_attr_list(struct xfs_inode *dp, char *buffer, int bufsize,
int flags, struct attrlist_cursor_kern *cursor);
#endif /* __XFS_ATTR_H__ */

fs/xfs/xfs_attr_inactive.c | 453

@@ -0,0 +1,453 @@
/*
* Copyright (c) 2000-2005 Silicon Graphics, Inc.
* Copyright (c) 2013 Red Hat, Inc.
* All Rights Reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it would be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "xfs.h"
#include "xfs_fs.h"
#include "xfs_format.h"
#include "xfs_bit.h"
#include "xfs_log.h"
#include "xfs_trans.h"
#include "xfs_sb.h"
#include "xfs_ag.h"
#include "xfs_mount.h"
#include "xfs_da_btree.h"
#include "xfs_bmap_btree.h"
#include "xfs_alloc_btree.h"
#include "xfs_ialloc_btree.h"
#include "xfs_alloc.h"
#include "xfs_btree.h"
#include "xfs_attr_remote.h"
#include "xfs_dinode.h"
#include "xfs_inode.h"
#include "xfs_inode_item.h"
#include "xfs_bmap.h"
#include "xfs_attr.h"
#include "xfs_attr_leaf.h"
#include "xfs_error.h"
#include "xfs_quota.h"
#include "xfs_trace.h"
#include "xfs_trans_priv.h"
/*
* Look at all the extents for this logical region,
* invalidate any buffers that are incore/in transactions.
*/
STATIC int
xfs_attr3_leaf_freextent(
struct xfs_trans **trans,
struct xfs_inode *dp,
xfs_dablk_t blkno,
int blkcnt)
{
struct xfs_bmbt_irec map;
struct xfs_buf *bp;
xfs_dablk_t tblkno;
xfs_daddr_t dblkno;
int tblkcnt;
int dblkcnt;
int nmap;
int error;
/*
* Roll through the "value", invalidating the attribute value's
* blocks.
*/
tblkno = blkno;
tblkcnt = blkcnt;
while (tblkcnt > 0) {
/*
* Try to remember where we decided to put the value.
*/
nmap = 1;
error = xfs_bmapi_read(dp, (xfs_fileoff_t)tblkno, tblkcnt,
&map, &nmap, XFS_BMAPI_ATTRFORK);
if (error) {
return(error);
}
ASSERT(nmap == 1);
ASSERT(map.br_startblock != DELAYSTARTBLOCK);
/*
* If it's a hole, these are already unmapped
* so there's nothing to invalidate.
*/
if (map.br_startblock != HOLESTARTBLOCK) {
dblkno = XFS_FSB_TO_DADDR(dp->i_mount,
map.br_startblock);
dblkcnt = XFS_FSB_TO_BB(dp->i_mount,
map.br_blockcount);
bp = xfs_trans_get_buf(*trans,
dp->i_mount->m_ddev_targp,
dblkno, dblkcnt, 0);
if (!bp)
return ENOMEM;
xfs_trans_binval(*trans, bp);
/*
* Roll to next transaction.
*/
error = xfs_trans_roll(trans, dp);
if (error)
return (error);
}
tblkno += map.br_blockcount;
tblkcnt -= map.br_blockcount;
}
return(0);
}
/*
* Invalidate all of the "remote" value regions pointed to by a particular
* leaf block.
* Note that we must release the lock on the buffer so that we are not
* caught holding something that the logging code wants to flush to disk.
*/
STATIC int
xfs_attr3_leaf_inactive(
struct xfs_trans **trans,
struct xfs_inode *dp,
struct xfs_buf *bp)
{
struct xfs_attr_leafblock *leaf;
struct xfs_attr3_icleaf_hdr ichdr;
struct xfs_attr_leaf_entry *entry;
struct xfs_attr_leaf_name_remote *name_rmt;
struct xfs_attr_inactive_list *list;
struct xfs_attr_inactive_list *lp;
int error;
int count;
int size;
int tmp;
int i;
leaf = bp->b_addr;
xfs_attr3_leaf_hdr_from_disk(&ichdr, leaf);
/*
* Count the number of "remote" value extents.
*/
count = 0;
entry = xfs_attr3_leaf_entryp(leaf);
for (i = 0; i < ichdr.count; entry++, i++) {
if (be16_to_cpu(entry->nameidx) &&
((entry->flags & XFS_ATTR_LOCAL) == 0)) {
name_rmt = xfs_attr3_leaf_name_remote(leaf, i);
if (name_rmt->valueblk)
count++;
}
}
/*
* If there are no "remote" values, we're done.
*/
if (count == 0) {
xfs_trans_brelse(*trans, bp);
return 0;
}
/*
* Allocate storage for a list of all the "remote" value extents.
*/
size = count * sizeof(xfs_attr_inactive_list_t);
list = kmem_alloc(size, KM_SLEEP);
/*
* Identify each of the "remote" value extents.
*/
lp = list;
entry = xfs_attr3_leaf_entryp(leaf);
for (i = 0; i < ichdr.count; entry++, i++) {
if (be16_to_cpu(entry->nameidx) &&
((entry->flags & XFS_ATTR_LOCAL) == 0)) {
name_rmt = xfs_attr3_leaf_name_remote(leaf, i);
if (name_rmt->valueblk) {
lp->valueblk = be32_to_cpu(name_rmt->valueblk);
lp->valuelen = xfs_attr3_rmt_blocks(dp->i_mount,
be32_to_cpu(name_rmt->valuelen));
lp++;
}
}
}
xfs_trans_brelse(*trans, bp); /* unlock for trans. in freextent() */
/*
* Invalidate each of the "remote" value extents.
*/
error = 0;
for (lp = list, i = 0; i < count; i++, lp++) {
tmp = xfs_attr3_leaf_freextent(trans, dp,
lp->valueblk, lp->valuelen);
if (error == 0)
error = tmp; /* save only the 1st errno */
}
kmem_free(list);
return error;
}
/*
* Recurse (gasp!) through the attribute nodes until we find leaves.
* We're doing a depth-first traversal in order to invalidate everything.
*/
STATIC int
xfs_attr3_node_inactive(
struct xfs_trans **trans,
struct xfs_inode *dp,
struct xfs_buf *bp,
int level)
{
xfs_da_blkinfo_t *info;
xfs_da_intnode_t *node;
xfs_dablk_t child_fsb;
xfs_daddr_t parent_blkno, child_blkno;
int error, i;
struct xfs_buf *child_bp;
struct xfs_da_node_entry *btree;
struct xfs_da3_icnode_hdr ichdr;
/*
* Since this code is recursive (gasp!) we must protect ourselves.
*/
if (level > XFS_DA_NODE_MAXDEPTH) {
xfs_trans_brelse(*trans, bp); /* no locks for later trans */
return XFS_ERROR(EIO);
}
node = bp->b_addr;
xfs_da3_node_hdr_from_disk(&ichdr, node);
parent_blkno = bp->b_bn;
if (!ichdr.count) {
xfs_trans_brelse(*trans, bp);
return 0;
}
btree = xfs_da3_node_tree_p(node);
child_fsb = be32_to_cpu(btree[0].before);
xfs_trans_brelse(*trans, bp); /* no locks for later trans */
/*
* If this is the node level just above the leaves, simply loop
* over the leaves removing all of them. If this is higher up
* in the tree, recurse downward.
*/
for (i = 0; i < ichdr.count; i++) {
/*
* Read the subsidiary block to see what we have to work with.
* Don't do this in a transaction. This is a depth-first
* traversal of the tree so we may deal with many blocks
* before we come back to this one.
*/
error = xfs_da3_node_read(*trans, dp, child_fsb, -2, &child_bp,
XFS_ATTR_FORK);
if (error)
return(error);
if (child_bp) {
/* save for re-read later */
child_blkno = XFS_BUF_ADDR(child_bp);
/*
* Invalidate the subtree, however we have to.
*/
info = child_bp->b_addr;
switch (info->magic) {
case cpu_to_be16(XFS_DA_NODE_MAGIC):
case cpu_to_be16(XFS_DA3_NODE_MAGIC):
error = xfs_attr3_node_inactive(trans, dp,
child_bp, level + 1);
break;
case cpu_to_be16(XFS_ATTR_LEAF_MAGIC):
case cpu_to_be16(XFS_ATTR3_LEAF_MAGIC):
error = xfs_attr3_leaf_inactive(trans, dp,
child_bp);
break;
default:
error = XFS_ERROR(EIO);
xfs_trans_brelse(*trans, child_bp);
break;
}
if (error)
return error;
/*
* Remove the subsidiary block from the cache
* and from the log.
*/
error = xfs_da_get_buf(*trans, dp, 0, child_blkno,
&child_bp, XFS_ATTR_FORK);
if (error)
return error;
xfs_trans_binval(*trans, child_bp);
}
/*
* If we're not done, re-read the parent to get the next
* child block number.
*/
if (i + 1 < ichdr.count) {
error = xfs_da3_node_read(*trans, dp, 0, parent_blkno,
&bp, XFS_ATTR_FORK);
if (error)
return error;
child_fsb = be32_to_cpu(btree[i + 1].before);
xfs_trans_brelse(*trans, bp);
}
/*
* Atomically commit the whole invalidate stuff.
*/
error = xfs_trans_roll(trans, dp);
if (error)
return error;
}
return 0;
}
/*
* Indiscriminately delete the entire attribute fork
*
* Recurse (gasp!) through the attribute nodes until we find leaves.
* We're doing a depth-first traversal in order to invalidate everything.
*/
int
xfs_attr3_root_inactive(
struct xfs_trans **trans,
struct xfs_inode *dp)
{
struct xfs_da_blkinfo *info;
struct xfs_buf *bp;
xfs_daddr_t blkno;
int error;
/*
* Read block 0 to see what we have to work with.
* We only get here if we have extents, since we remove
* the extents in reverse order the extent containing
* block 0 must still be there.
*/
error = xfs_da3_node_read(*trans, dp, 0, -1, &bp, XFS_ATTR_FORK);
if (error)
return error;
blkno = bp->b_bn;
/*
* Invalidate the tree, even if the "tree" is only a single leaf block.
* This is a depth-first traversal!
*/
info = bp->b_addr;
switch (info->magic) {
case cpu_to_be16(XFS_DA_NODE_MAGIC):
case cpu_to_be16(XFS_DA3_NODE_MAGIC):
error = xfs_attr3_node_inactive(trans, dp, bp, 1);
break;
case cpu_to_be16(XFS_ATTR_LEAF_MAGIC):
case cpu_to_be16(XFS_ATTR3_LEAF_MAGIC):
error = xfs_attr3_leaf_inactive(trans, dp, bp);
break;
default:
error = XFS_ERROR(EIO);
xfs_trans_brelse(*trans, bp);
break;
}
if (error)
return error;
/*
* Invalidate the incore copy of the root block.
*/
error = xfs_da_get_buf(*trans, dp, 0, blkno, &bp, XFS_ATTR_FORK);
if (error)
return error;
xfs_trans_binval(*trans, bp); /* remove from cache */
/*
* Commit the invalidate and start the next transaction.
*/
error = xfs_trans_roll(trans, dp);
return error;
}
int
xfs_attr_inactive(xfs_inode_t *dp)
{