Original development tree for the Linux kernel GTP module; now long in mainline.

1281 lines
28 KiB

[MTD] Rework the out of band handling completely

Hopefully the last iteration on this!

The handling of out of band data on NAND was accompanied by tons of fruitless discussions and half-arsed patches to make it work for a particular problem. Sufficiently annoyed by all those "I know it better" mails and the reasonable amount of discarded "it solves my problem" patches, I finally decided to go for the big rework. After removing the _ecc variants of the mtd read/write functions, the solution that satisfies the various requirements was to refactor the read/write _oob functions in mtd.

The major change is that read/write_oob now takes a pointer to an operation descriptor structure, "struct mtd_oob_ops", instead of having a function with at least seven arguments. read/write_oob, which should probably be renamed to something more descriptive, can do the following tasks:

- read/write out of band data
- read/write data content and out of band data
- read/write raw data content and out of band data (ecc disabled)

struct mtd_oob_ops has a mode field, which determines the oob handling mode. Aside from the MTD_OOB_RAW mode, which is intended especially for diagnostic purposes and some internal functions, e.g. bad block table creation, the other two modes are for mtd clients:

MTD_OOB_PLACE puts/gets the given oob data exactly to/from the place described by the ooboffs and ooblen fields of the mtd_oob_ops structure. It's up to the caller to make sure that the byte positions are not used by the ECC placement algorithms.

MTD_OOB_AUTO puts/gets the given oob data automatically to/from the places in the out of band area which are described by the oobfree tuples in the ecclayout data structure associated with the device.

The decision whether data plus oob or oob-only handling is done depends on the setting of the datbuf member of the data structure. When datbuf == NULL, the internal read/write_oob functions are selected; otherwise the read/write data routines are invoked.

Tested on a few platforms with all variants. Please be aware of possible regressions for your particular device / application scenario.

Disclaimer: Any whining will be ignored from those who just contributed "hot air blurb" and never sat down to tackle the underlying problem of the mess in the NAND driver grown over time and the big chunk of work to fix up the existing users. The problem was not the holiness of the existing MTD interfaces. The problem was the lack of time to go for the big overhaul. It's easy to add more mess to the existing one, but it takes a lot of effort to go for a real solution.

Improvements and bugfixes are welcome!

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
16 years ago
mm: kill vma flag VM_RESERVED and mm->reserved_vm counter

A long time ago, in v2.4, VM_RESERVED kept the swapout process off a VMA; it has since lost its original meaning but still has some effects:

 | effect                 | alternative flags
-+------------------------+---------------------------------------------
1| account as reserved_vm | VM_IO
2| skip in core dump      | VM_IO, VM_DONTDUMP
3| do not merge or expand | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
4| do not mlock           | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP

This patch removes the reserved_vm counter from mm_struct. Nobody seems to care about it; it is not exported to userspace directly, it only reduces the total_vm shown in proc. Thus VM_RESERVED can be replaced with VM_IO or the pair VM_DONTEXPAND | VM_DONTDUMP.

remap_pfn_range() and io_remap_pfn_range() set VM_IO | VM_DONTEXPAND | VM_DONTDUMP. remap_vmalloc_range() sets VM_DONTEXPAND | VM_DONTDUMP.

[akpm@linux-foundation.org: drivers/vfio/pci/vfio_pci.c fixup]
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Carsten Otte <cotte@de.ibm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Eric Paris <eparis@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Morris <james.l.morris@oracle.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Kentaro Takeda <takedakn@nttdata.co.jp>
Cc: Matt Helsley <matthltc@us.ibm.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Venkatesh Pallipadi <venki@google.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
9 years ago
fs: Limit sys_mount to only request filesystem modules.

Modify request_module to prefix the file system type with "fs-" and add aliases to all of the filesystems that can be built as modules to match.

A common practice is to build all of the kernel code and leave code that is not commonly needed as modules, with the result that many users are exposed to any bug anywhere in the kernel. Looking for filesystems with a fs- prefix limits the pool of possible modules that can be loaded by mount to just filesystems, trivially making things safer with no real cost.

Using aliases means user space can control the policy of which filesystem modules are auto-loaded by editing /etc/modprobe.d/*.conf with blacklist and alias directives, allowing simple, safe, well understood work-arounds to known problematic software.

This also addresses a rare but unfortunate problem where the filesystem name is not the same as its module name and module auto-loading would not work. While writing this patch I saw a handful of such cases, the most significant being autofs, which lives in the module autofs4.

This is relevant to user namespaces because we can reach the request module in get_fs_type() without having any special permissions, and people get uncomfortable when a user specified string (in this case the filesystem type) goes all of the way to request_module. After having looked at this issue I don't think there is any particular reason to perform any filtering or permission checks beyond making it clear in the module request that we want a filesystem module. The common pattern in the kernel is to call request_module() without regard to the user's permissions. In general all a filesystem module does once loaded is call register_filesystem() and go to sleep, which means there is not much attack surface exposed by loading a filesystem module unless the filesystem is mounted.

In a user namespace filesystems are not mounted unless .fs_flags = FS_USERNS_MOUNT, which most filesystems do not set today.

Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Acked-by: Kees Cook <keescook@chromium.org>
Reported-by: Kees Cook <keescook@google.com>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
9 years ago
/*
 * Copyright © 1999-2010 David Woodhouse <dwmw2@infradead.org>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
 *
 */

#include <linux/device.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/mutex.h>
#include <linux/backing-dev.h>
#include <linux/compat.h>
#include <linux/mount.h>
#include <linux/blkpg.h>
#include <linux/magic.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/map.h>

#include <asm/uaccess.h>

static DEFINE_MUTEX(mtd_mutex);

/*
 * Data structure to hold the pointer to the mtd device as well
 * as mode information of various use cases.
 */
struct mtd_file_info {
	struct mtd_info *mtd;
	struct inode *ino;
	enum mtd_file_modes mode;
};
static loff_t mtdchar_lseek(struct file *file, loff_t offset, int orig)
{
	struct mtd_file_info *mfi = file->private_data;
	struct mtd_info *mtd = mfi->mtd;

	switch (orig) {
	case SEEK_SET:
		break;
	case SEEK_CUR:
		offset += file->f_pos;
		break;
	case SEEK_END:
		offset += mtd->size;
		break;
	default:
		return -EINVAL;
	}

	if (offset >= 0 && offset <= mtd->size)
		return file->f_pos = offset;

	return -EINVAL;
}

static int count;
static struct vfsmount *mnt;
static struct file_system_type mtd_inodefs_type;

static int mtdchar_open(struct inode *inode, struct file *file)
{
	int minor = iminor(inode);
	int devnum = minor >> 1;
	int ret = 0;
	struct mtd_info *mtd;
	struct mtd_file_info *mfi;
	struct inode *mtd_ino;

	pr_debug("MTD_open\n");

	/* You can't open the RO devices RW */
	if ((file->f_mode & FMODE_WRITE) && (minor & 1))
		return -EACCES;

	ret = simple_pin_fs(&mtd_inodefs_type, &mnt, &count);
	if (ret)
		return ret;

	mutex_lock(&mtd_mutex);
	mtd = get_mtd_device(NULL, devnum);

	if (IS_ERR(mtd)) {
		ret = PTR_ERR(mtd);
		goto out;
	}

	if (mtd->type == MTD_ABSENT) {
		ret = -ENODEV;
		goto out1;
	}

	mtd_ino = iget_locked(mnt->mnt_sb, devnum);
	if (!mtd_ino) {
		ret = -ENOMEM;
		goto out1;
	}
	if (mtd_ino->i_state & I_NEW) {
		mtd_ino->i_private = mtd;
		mtd_ino->i_mode = S_IFCHR;
		mtd_ino->i_data.backing_dev_info = mtd->backing_dev_info;
		unlock_new_inode(mtd_ino);
	}
	file->f_mapping = mtd_ino->i_mapping;

	/* You can't open it RW if it's not a writeable device */
	if ((file->f_mode & FMODE_WRITE) && !(mtd->flags & MTD_WRITEABLE)) {
		ret = -EACCES;
		goto out2;
	}

	mfi = kzalloc(sizeof(*mfi), GFP_KERNEL);
	if (!mfi) {
		ret = -ENOMEM;
		goto out2;
	}
	mfi->ino = mtd_ino;
	mfi->mtd = mtd;
	file->private_data = mfi;
	mutex_unlock(&mtd_mutex);
	return 0;

out2:
	iput(mtd_ino);
out1:
	put_mtd_device(mtd);
out:
	mutex_unlock(&mtd_mutex);
	simple_release_fs(&mnt, &count);
	return ret;
} /* mtdchar_open */

/*====================================================================*/

static int mtdchar_close(struct inode *inode, struct file *file)
{
	struct mtd_file_info *mfi = file->private_data;
	struct mtd_info *mtd = mfi->mtd;

	pr_debug("MTD_close\n");

	/* Only sync if opened RW */
	if ((file->f_mode & FMODE_WRITE))
		mtd_sync(mtd);

	iput(mfi->ino);

	put_mtd_device(mtd);
	file->private_data = NULL;
	kfree(mfi);
	simple_release_fs(&mnt, &count);

	return 0;
} /* mtdchar_close */
/* Back in June 2001, dwmw2 wrote:
 *
 *   FIXME: This _really_ needs to die. In 2.5, we should lock the
 *   userspace buffer down and use it directly with readv/writev.
 *
 * The implementation below, using mtd_kmalloc_up_to, mitigates
 * allocation failures when the system is under low-memory situations
 * or if memory is highly fragmented at the cost of reducing the
 * performance of the requested transfer due to a smaller buffer size.
 *
 * A more complex but more memory-efficient implementation based on
 * get_user_pages and iovecs to cover extents of those pages is a
 * longer-term goal, as intimated by dwmw2 above. However, for the
 * write case, this requires yet more complex head and tail transfer
 * handling when those head and tail offsets and sizes are such that
 * alignment requirements are not met in the NAND subdriver.
 */

static ssize_t mtdchar_read(struct file *file, char __user *buf, size_t count,
			loff_t *ppos)
{
	struct mtd_file_info *mfi = file->private_data;
	struct mtd_info *mtd = mfi->mtd;
	size_t retlen;
	size_t total_retlen = 0;
	int ret = 0;
	int len;
	size_t size = count;
	char *kbuf;

	pr_debug("MTD_read\n");

	if (*ppos + count > mtd->size)
		count = mtd->size - *ppos;

	if (!count)
		return 0;

	kbuf = mtd_kmalloc_up_to(mtd, &size);
	if (!kbuf)
		return -ENOMEM;

	while (count) {
		len = min_t(size_t, count, size);

		switch (mfi->mode) {
		case MTD_FILE_MODE_OTP_FACTORY:
			ret = mtd_read_fact_prot_reg(mtd, *ppos, len,
						     &retlen, kbuf);
			break;
		case MTD_FILE_MODE_OTP_USER:
			ret = mtd_read_user_prot_reg(mtd, *ppos, len,
						     &retlen, kbuf);
			break;
		case MTD_FILE_MODE_RAW:
		{
			struct mtd_oob_ops ops;

			ops.mode = MTD_OPS_RAW;
			ops.datbuf = kbuf;
			ops.oobbuf = NULL;
			ops.len = len;

			ret = mtd_read_oob(mtd, *ppos, &ops);
			retlen = ops.retlen;
			break;
		}
		default:
			ret = mtd_read(mtd, *ppos, len, &retlen, kbuf);
		}
		/*
		 * NAND returns -EBADMSG on ECC errors, but it returns
		 * the data. For our userspace tools it is important
		 * to dump areas with ECC errors!
		 * For kernel internal usage it also might return -EUCLEAN
		 * to signal the caller that a bitflip has occurred and has
		 * been corrected by the ECC algorithm.
		 * Userspace software which accesses NAND this way
		 * must be aware of the fact that it deals with NAND.
		 */
		if (!ret || mtd_is_bitflip_or_eccerr(ret)) {
			*ppos += retlen;
			if (copy_to_user(buf, kbuf, retlen)) {
				kfree(kbuf);
				return -EFAULT;
			}
			else
				total_retlen += retlen;

			count -= retlen;
			buf += retlen;
			if (retlen == 0)
				count = 0;
		}
		else {
			kfree(kbuf);
			return ret;
		}
	}

	kfree(kbuf);
	return total_retlen;
} /* mtdchar_read */

static ssize_t mtdchar_write(struct file *file, const char __user *buf, size_t count,
			loff_t *ppos)
{
	struct mtd_file_info *mfi = file->private_data;
	struct mtd_info *mtd = mfi->mtd;
	size_t size = count;
	char *kbuf;
	size_t retlen;
	size_t total_retlen = 0;
	int ret = 0;
	int len;

	pr_debug("MTD_write\n");

	if (*ppos == mtd->size)
		return -ENOSPC;

	if (*ppos + count > mtd->size)
		count = mtd->size - *ppos;

	if (!count)
		return 0;

	kbuf = mtd_kmalloc_up_to(mtd, &size);
	if (!kbuf)
		return -ENOMEM;

	while (count) {
		len = min_t(size_t, count, size);

		if (copy_from_user(kbuf, buf, len)) {
			kfree(kbuf);
			return -EFAULT;
		}

		switch (mfi->mode) {
		case MTD_FILE_MODE_OTP_FACTORY:
			ret = -EROFS;
			break;
		case MTD_FILE_MODE_OTP_USER:
			ret = mtd_write_user_prot_reg(mtd, *ppos, len,
						      &retlen, kbuf);
			break;
		case MTD_FILE_MODE_RAW:
		{
			struct mtd_oob_ops ops;

			ops.mode = MTD_OPS_RAW;
			ops.datbuf = kbuf;
			ops.oobbuf = NULL;
			ops.ooboffs = 0;
			ops.len = len;

			ret = mtd_write_oob(mtd, *ppos, &ops);
			retlen = ops.retlen;
			break;
		}
		default:
			ret = mtd_write(mtd, *ppos, len, &retlen, kbuf);
		}

		if (!ret) {
			*ppos += retlen;
			total_retlen += retlen;
			count -= retlen;
			buf += retlen;
		}
		else {
			kfree(kbuf);
			return ret;
		}
	}

	kfree(kbuf);
	return total_retlen;
} /* mtdchar_write */
/*======================================================================

    IOCTL calls for getting device parameters.

======================================================================*/

static void mtdchar_erase_callback (struct erase_info *instr)
{
	wake_up((wait_queue_head_t *)instr->priv);
}

#ifdef CONFIG_HAVE_MTD_OTP
static int otp_select_filemode(struct mtd_file_info *mfi, int mode)
{
	struct mtd_info *mtd = mfi->mtd;
	size_t retlen;
	int ret = 0;

	/*
	 * Make a fake call to mtd_read_fact_prot_reg() to check if OTP
	 * operations are supported.
	 */
	if (mtd_read_fact_prot_reg(mtd, -1, 0, &retlen, NULL) == -EOPNOTSUPP)
		return -EOPNOTSUPP;

	switch (mode) {
	case MTD_OTP_FACTORY:
		mfi->mode = MTD_FILE_MODE_OTP_FACTORY;
		break;
	case MTD_OTP_USER:
		mfi->mode = MTD_FILE_MODE_OTP_USER;
		break;
	default:
		ret = -EINVAL;
	case MTD_OTP_OFF:
		break;
	}
	return ret;
}
#else
# define otp_select_filemode(f,m)	-EOPNOTSUPP
#endif
static int mtdchar_writeoob(struct file *file, struct mtd_info *mtd,
	uint64_t start, uint32_t length, void __user *ptr,
	uint32_t __user *retp)
{
	struct mtd_file_info *mfi = file->private_data;
	struct mtd_oob_ops ops;
	uint32_t retlen;
	int ret = 0;

	if (!(file->f_mode & FMODE_WRITE))
		return -EPERM;

	if (length > 4096)
		return -EINVAL;

	if (!mtd->_write_oob)
		ret = -EOPNOTSUPP;
	else
		ret = access_ok(VERIFY_READ, ptr, length) ? 0 : -EFAULT;

	if (ret)
		return ret;

	ops.ooblen = length;
	ops.ooboffs = start & (mtd->writesize - 1);
	ops.datbuf = NULL;
	ops.mode = (mfi->mode == MTD_FILE_MODE_RAW) ? MTD_OPS_RAW :
		MTD_OPS_PLACE_OOB;

	if (ops.ooboffs && ops.ooblen > (mtd->oobsize - ops.ooboffs))
		return -EINVAL;

	ops.oobbuf = memdup_user(ptr, length);
	if (IS_ERR(ops.oobbuf))
		return PTR_ERR(ops.oobbuf);

	start &= ~((uint64_t)mtd->writesize - 1);
	ret = mtd_write_oob(mtd, start, &ops);

	if (ops.oobretlen > 0xFFFFFFFFU)
		ret = -EOVERFLOW;
	retlen = ops.oobretlen;
	if (copy_to_user(retp, &retlen, sizeof(length)))
		ret = -EFAULT;

	kfree(ops.oobbuf);
	return ret;
}

static int mtdchar_readoob(struct file *file, struct mtd_info *mtd,
	uint64_t start, uint32_t length, void __user *ptr,
	uint32_t __user *retp)
{
	struct mtd_file_info *mfi = file->private_data;
	struct mtd_oob_ops ops;
	int ret = 0;

	if (length > 4096)
		return -EINVAL;

	if (!access_ok(VERIFY_WRITE, ptr, length))
		return -EFAULT;

	ops.ooblen = length;
	ops.ooboffs = start & (mtd->writesize - 1);
	ops.datbuf = NULL;
	ops.mode = (mfi->mode == MTD_FILE_MODE_RAW) ? MTD_OPS_RAW :
		MTD_OPS_PLACE_OOB;

	if (ops.ooboffs && ops.ooblen > (mtd->oobsize - ops.ooboffs))
		return -EINVAL;

	ops.oobbuf = kmalloc(length, GFP_KERNEL);
	if (!ops.oobbuf)
		return -ENOMEM;

	start &= ~((uint64_t)mtd->writesize - 1);
	ret = mtd_read_oob(mtd, start, &ops);

	if (put_user(ops.oobretlen, retp))
		ret = -EFAULT;
	else if (ops.oobretlen && copy_to_user(ptr, ops.oobbuf,
					       ops.oobretlen))
		ret = -EFAULT;

	kfree(ops.oobbuf);

	/*
	 * NAND returns -EBADMSG on ECC errors, but it returns the OOB
	 * data. For our userspace tools it is important to dump areas
	 * with ECC errors!
	 * For kernel internal usage it also might return -EUCLEAN
	 * to signal the caller that a bitflip has occurred and has
	 * been corrected by the ECC algorithm.
	 *
	 * Note: currently the standard NAND function, nand_read_oob_std,
	 * does not calculate ECC for the OOB area, so do not rely on
	 * this behavior unless you have replaced it with your own.
	 */
	if (mtd_is_bitflip_or_eccerr(ret))
		return 0;

	return ret;
}
/*
 * Copies (and truncates, if necessary) data from the larger struct,
 * nand_ecclayout, to the smaller, deprecated layout struct,
 * nand_ecclayout_user. This is necessary only to support the deprecated
 * API ioctl ECCGETLAYOUT while allowing all new functionality to use
 * nand_ecclayout flexibly (i.e. the struct may change size in new
 * releases without requiring major rewrites).
 */
static int shrink_ecclayout(const struct nand_ecclayout *from,
		struct nand_ecclayout_user *to)
{
	int i;

	if (!from || !to)
		return -EINVAL;

	memset(to, 0, sizeof(*to));

	to->eccbytes = min((int)from->eccbytes, MTD_MAX_ECCPOS_ENTRIES);
	for (i = 0; i < to->eccbytes; i++)
		to->eccpos[i] = from->eccpos[i];

	for (i = 0; i < MTD_MAX_OOBFREE_ENTRIES; i++) {
		if (from->oobfree[i].length == 0 &&
				from->oobfree[i].offset == 0)
			break;
		to->oobavail += from->oobfree[i].length;
		to->oobfree[i] = from->oobfree[i];
	}

	return 0;
}

static int mtdchar_blkpg_ioctl(struct mtd_info *mtd,
			struct blkpg_ioctl_arg __user *arg)
{
	struct blkpg_ioctl_arg a;
	struct blkpg_partition p;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	if (copy_from_user(&a, arg, sizeof(struct blkpg_ioctl_arg)))
		return -EFAULT;

	if (copy_from_user(&p, a.data, sizeof(struct blkpg_partition)))
		return -EFAULT;

	switch (a.op) {
	case BLKPG_ADD_PARTITION:

		/* Only master mtd device must be used to add partitions */
		if (mtd_is_partition(mtd))
			return -EINVAL;

		return mtd_add_partition(mtd, p.devname, p.start, p.length);

	case BLKPG_DEL_PARTITION:

		if (p.pno < 0)
			return -EINVAL;

		return mtd_del_partition(mtd, p.pno);

	default:
		return -EINVAL;
	}
}

static int mtdchar_write_ioctl(struct mtd_info *mtd,
		struct mtd_write_req __user *argp)
{
	struct mtd_write_req req;
	struct mtd_oob_ops ops;
	void __user *usr_data, *usr_oob;
	int ret;

	if (copy_from_user(&req, argp, sizeof(req)) ||
	    !access_ok(VERIFY_READ, req.usr_data, req.len) ||
	    !access_ok(VERIFY_READ, req.usr_oob, req.ooblen))
		return -EFAULT;
	if (!mtd->_write_oob)
		return -EOPNOTSUPP;

	ops.mode = req.mode;
	ops.len = (size_t)req.len;
	ops.ooblen = (size_t)req.ooblen;
	ops.ooboffs = 0;

	usr_data = (void __user *)(uintptr_t)req.usr_data;
	usr_oob = (void __user *)(uintptr_t)req.usr_oob;

	if (req.usr_data) {
		ops.datbuf = memdup_user(usr_data, ops.len);
		if (IS_ERR(ops.datbuf))
			return PTR_ERR(ops.datbuf);
	} else {
		ops.datbuf = NULL;
	}

	if (req.usr_oob) {
		ops.oobbuf = memdup_user(usr_oob, ops.ooblen);
		if (IS_ERR(ops.oobbuf)) {
			kfree(ops.datbuf);
			return PTR_ERR(ops.oobbuf);
		}
	} else {
		ops.oobbuf = NULL;
	}

	ret = mtd_write_oob(mtd, (loff_t)req.start, &ops);

	kfree(ops.datbuf);
	kfree(ops.oobbuf);

	return ret;
}
static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
{
	struct mtd_file_info *mfi = file->private_data;
	struct mtd_info *mtd = mfi->mtd;
	void __user *argp = (void __user *)arg;
	int ret = 0;
	u_long size;
	struct mtd_info_user info;

	pr_debug("MTD_ioctl\n");

	size = (cmd & IOCSIZE_MASK) >> IOCSIZE_SHIFT;
	if (cmd & IOC_IN) {
		if (!access_ok(VERIFY_READ, argp, size))
			return -EFAULT;
	}
	if (cmd & IOC_OUT) {
		if (!access_ok(VERIFY_WRITE, argp, size))
			return -EFAULT;
	}

	switch (cmd) {
	case MEMGETREGIONCOUNT:
		if (copy_to_user(argp, &(mtd->numeraseregions), sizeof(int)))
			return -EFAULT;
		break;

	case MEMGETREGIONINFO:
	{
		uint32_t ur_idx;
		struct mtd_erase_region_info *kr;
		struct region_info_user __user *ur = argp;

		if (get_user(ur_idx, &(ur->regionindex)))
			return -EFAULT;

		if (ur_idx >= mtd->numeraseregions)
			return -EINVAL;

		kr = &(mtd->eraseregions[ur_idx]);

		if (put_user(kr->offset, &(ur->offset))
		    || put_user(kr->erasesize, &(ur->erasesize))
		    || put_user(kr->numblocks, &(ur->numblocks)))
			return -EFAULT;

		break;
	}

	case MEMGETINFO: