

Android ION

 昵稱3554661 2020-04-07

Android changed the ION driver API in Linux kernel 4.12, and some of the original ioctl commands no longer exist.


Google's ION is, in my opinion, fairly large. The system heap is one way of allocating memory; others use CMA and so on, and the different allocation strategies call different Linux interfaces. This article only covers my personal understanding of the system heap. The ION code lives under kernel\msm-4.14\drivers\staging\android\ion. Whichever heap ION ultimately uses to allocate memory, the resulting buffer is wrapped in a Linux dma-buf. dma-buf is a framework in Linux; I have not studied its code in detail, but judging from how ION uses it, every buffer ION allocates is stored in a dma-buf structure, and Google also attaches an ops function table for that buffer to the dma-buf. When the buffer is used, the dma-buf ops are invoked indirectly to operate on it, and those ops in turn call the ops bound to the heap. For example, when the system heap is created it binds alloc, mmap, free, shrink and other functions; the dma-buf ops end up calling these.

Google's implementation of the dma-buf ops can be seen in ion.c:

static const struct dma_buf_ops dma_buf_ops = {
    .map_dma_buf = ion_map_dma_buf,
    .unmap_dma_buf = ion_unmap_dma_buf,
    .mmap = ion_mmap,
    .release = ion_dma_buf_release,
    .attach = ion_dma_buf_attach,
    .detach = ion_dma_buf_detatch,
    .begin_cpu_access = ion_dma_buf_begin_cpu_access,
    .end_cpu_access = ion_dma_buf_end_cpu_access,
    .begin_cpu_access_umapped = ion_dma_buf_begin_cpu_access_umapped,
    .end_cpu_access_umapped = ion_dma_buf_end_cpu_access_umapped,
    .begin_cpu_access_partial = ion_dma_buf_begin_cpu_access_partial,
    .end_cpu_access_partial = ion_dma_buf_end_cpu_access_partial,
    .map_atomic = ion_dma_buf_kmap,
    .unmap_atomic = ion_dma_buf_kunmap,
    .map = ion_dma_buf_kmap,
    .unmap = ion_dma_buf_kunmap,
    .vmap = ion_dma_buf_vmap,
    .vunmap = ion_dma_buf_vunmap,
    .get_flags = ion_dma_buf_get_flags,
};

ion.h defines the functions a heap must implement:

/**
 * struct ion_heap_ops - ops to operate on a given heap
 * @allocate:     allocate memory
 * @free:         free memory
 * @map_kernel:   map memory to the kernel
 * @unmap_kernel: unmap memory from the kernel
 * @map_user:     map memory to userspace
 *
 * allocate, phys, and map_user return 0 on success, -errno on error.
 * map_dma and map_kernel return pointer on success, ERR_PTR on
 * error. @free will be called with ION_PRIV_FLAG_SHRINKER_FREE set in
 * the buffer's private_flags when called from a shrinker. In that
 * case, the pages being free'd must be truly free'd back to the
 * system, not put in a page pool or otherwise cached.
 */
struct ion_heap_ops {
    int (*allocate)(struct ion_heap *heap,
                    struct ion_buffer *buffer, unsigned long len,
                    unsigned long flags);
    void (*free)(struct ion_buffer *buffer);
    void * (*map_kernel)(struct ion_heap *heap, struct ion_buffer *buffer);
    void (*unmap_kernel)(struct ion_heap *heap, struct ion_buffer *buffer);
    int (*map_user)(struct ion_heap *mapper, struct ion_buffer *buffer,
                    struct vm_area_struct *vma);
    int (*shrink)(struct ion_heap *heap, gfp_t gfp_mask, int nr_to_scan);
};

Before diving into how memory is allocated for ION, a few concepts are worth knowing. struct sg_table is the Linux structure that holds a scatter list of physical pages; for details I recommend the wowotech article "Linux kernel scatterlist API介紹". In short, this structure stores a scatter list of physical pages. The system heap does not hand out one physically contiguous region: the pages may be discontiguous, as long as the virtual addresses are contiguous. For example, if the camera requests a 12 MB buffer, what comes out of the buddy allocator may be a number of 64 KB chunks; each 64 KB chunk is internally contiguous, but the chunks are not contiguous with each other.

Buddy system: there is plenty of material about this online and the concept is fairly simple. The buddy system manages physical memory through per-order free lists and allocates blocks of 2^order physical pages at a time.

File descriptor fd: after ION allocates memory it ultimately returns an fd. The fd is passed to other processes over Binder and then mapped into each process's virtual address space. An fd is only valid within one process; passing it to another process goes through Android's Binder mechanism. Roughly, Binder first allocates an fd in the target process, then binds that fd to the same kernel struct file that backs the current process's fd.

1. Memory allocation

ION allocates memory through ioctl, called after the device has been opened:

case ION_IOC_ALLOC:
{
    int fd;

    fd = ion_alloc_fd(data.allocation.len,
                      data.allocation.heap_id_mask,
                      data.allocation.flags);
    if (fd < 0)
        return fd;
    data.allocation.fd = fd;
    break;
}

You can see ion_alloc_fd is called to produce an fd. ion_alloc_fd takes three arguments: the first is the length of the buffer to allocate; the second selects the heap (ION has many heap types; this article only discusses the system heap, since the other heaps' code is harder to follow); the third is a flags word used to decide various allocation attributes, for example whether this is camera memory or whether a secure allocation is needed. ion_alloc_fd is implemented as follows:

int ion_alloc_fd(size_t len, unsigned int heap_id_mask, unsigned int flags)
{
    int fd;
    struct dma_buf *dmabuf;

    dmabuf = ion_alloc_dmabuf(len, heap_id_mask, flags);
    if (IS_ERR(dmabuf)) {
        return PTR_ERR(dmabuf);
    }
    fd = dma_buf_fd(dmabuf, O_CLOEXEC);
    if (fd < 0)
        dma_buf_put(dmabuf);
    return fd;
}

First a dma_buf is produced, then the dma-buf is turned into an fd. dma-buf is defined in kernel\msm-4.14\include\linux\dma-buf.h, and the official comments explain each field:

/**
 * struct dma_buf - shared buffer object
 * @size: size of the buffer
 * @file: file pointer used for sharing buffers across, and for refcounting.
 * @attachments: list of dma_buf_attachment that denotes all devices attached.
 * @ops: dma_buf_ops associated with this buffer object.
 * @lock: used internally to serialize list manipulation, attach/detach and vmap/unmap
 * @vmapping_counter: used internally to refcnt the vmaps
 * @vmap_ptr: the current vmap ptr if vmapping_counter > 0
 * @exp_name: name of the exporter; useful for debugging.
 * @name: unique name for the buffer
 * @ktime: time (in jiffies) at which the buffer was born
 * @owner: pointer to exporter module; used for refcounting when exporter is a
 *         kernel module.
 * @list_node: node for dma_buf accounting and debugging.
 * @priv: exporter specific private data for this buffer object.
 * @resv: reservation object linked to this dma-buf
 * @poll: for userspace poll support
 * @cb_excl: for userspace poll support
 * @cb_shared: for userspace poll support
 *
 * This represents a shared buffer, created by calling dma_buf_export(). The
 * userspace representation is a normal file descriptor, which can be created by
 * calling dma_buf_fd().
 *
 * Shared dma buffers are reference counted using dma_buf_put() and
 * get_dma_buf().
 *
 * Device DMA access is handled by the separate &struct dma_buf_attachment.
 */
struct dma_buf {
    size_t size;
    struct file *file;
    struct list_head attachments;
    const struct dma_buf_ops *ops;
    struct mutex lock;
    unsigned vmapping_counter;
    void *vmap_ptr;
    const char *exp_name;
    char *name;
    ktime_t ktime;
    struct module *owner;
    struct list_head list_node;
    void *priv;
    struct reservation_object *resv;
    /* poll support */
    wait_queue_head_t poll;
    struct dma_buf_poll_cb_t {
        struct dma_fence_cb cb;
        wait_queue_head_t *poll;
        unsigned long active;
    } cb_excl, cb_shared;
    struct list_head refs;
};

struct file is the important field here: it is what the fd will later be tied to; an fd is effectively connected to a struct file. Multiple fds can share the same struct file, which is why mmap'ing the fd can produce multiple virtual mappings.

ion_alloc_dmabuf lives in kernel\msm-4.14\drivers\staging\android\ion\ion.c:

struct dma_buf *ion_alloc_dmabuf(size_t len, unsigned int heap_id_mask,
                                 unsigned int flags)
{
    struct ion_device *dev = internal_dev;
    struct ion_buffer *buffer = NULL;
    struct ion_heap *heap;
    DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
    struct dma_buf *dmabuf;
    char task_comm[TASK_COMM_LEN];

    pr_debug("%s: len %zu heap_id_mask %u flags %x\n", __func__,
             len, heap_id_mask, flags);
    /*
     * traverse the list of heaps available in this system in priority
     * order. If the heap type is supported by the client, and matches the
     * request of the caller allocate from it. Repeat until allocate has
     * succeeded or all heaps have been tried
     */
    len = PAGE_ALIGN(len);
    if (!len)
        return ERR_PTR(-EINVAL);
    down_read(&dev->lock);
    plist_for_each_entry(heap, &dev->heaps, node) {
        /* if the caller didn't specify this heap id */
        if (!((1 << heap->id) & heap_id_mask))
            continue;
        buffer = ion_buffer_create(heap, dev, len, flags);
        if (!IS_ERR(buffer) || PTR_ERR(buffer) == -EINTR)
            break;
    }
    up_read(&dev->lock);
    if (!buffer)
        return ERR_PTR(-ENODEV);
    if (IS_ERR(buffer))
        return ERR_CAST(buffer);
    get_task_comm(task_comm, current->group_leader);
    exp_info.ops = &dma_buf_ops;
    exp_info.size = buffer->size;
    exp_info.flags = O_RDWR;
    exp_info.priv = buffer;
    exp_info.exp_name = kasprintf(GFP_KERNEL, "%s-%s-%d-%s", KBUILD_MODNAME,
                                  heap->name, current->tgid, task_comm);
    dmabuf = dma_buf_export(&exp_info);
    if (IS_ERR(dmabuf)) {
        _ion_buffer_destroy(buffer);
        kfree(exp_info.exp_name);
    }
    return dmabuf;
}

PAGE_ALIGN rounds the length up to a page boundary: if the requested buffer size is, say, 5 KB, it becomes 8 KB here, because pages are 4 KB. There is a corresponding round-down, which would turn 5 KB into 4 KB.

plist_for_each_entry walks all the heaps looking for the matching heap type and calls that heap's buffer allocation function; here we assume the heap is the system heap.

To inspect the system heap on a phone, enter /sys/kernel/debug/ion/heaps from adb shell

and run cat system:

uncached pool = 349003776 cached pool = 1063071744 secure pool = 0
pool total (uncached + cached + secure) = 1412075520

The system heap has three pools; these are the three pools Google set up to hold physical pages. You can also add pools of your own.

After the matching heap is found, ion_buffer_create is executed to create the ion buffer; the structure is defined in kernel\msm-4.14\drivers\staging\android\ion\ion.h:

/**
 * struct ion_buffer - metadata for a particular buffer
 * @ref: reference count
 * @node: node in the ion_device buffers tree
 * @dev: back pointer to the ion_device
 * @heap: back pointer to the heap the buffer came from
 * @flags: buffer specific flags
 * @private_flags: internal buffer specific flags
 * @size: size of the buffer
 * @priv_virt: private data to the buffer representable as
 *             a void *
 * @lock: protects the buffers cnt fields
 * @kmap_cnt: number of times the buffer is mapped to the kernel
 * @vaddr: the kernel mapping if kmap_cnt is not zero
 * @sg_table: the sg table for the buffer if dmap_cnt is not zero
 * @vmas: list of vma's mapping this buffer
 */
struct ion_buffer {
    union {
        struct rb_node node;
        struct list_head list;
    };
    struct ion_device *dev;
    struct ion_heap *heap;
    unsigned long flags;
    unsigned long private_flags;
    size_t size;
    void *priv_virt;
    /* Protect ion buffer */
    struct mutex lock;
    int kmap_cnt;
    void *vaddr;
    struct sg_table *sg_table;
    struct list_head attachments;
    struct list_head vmas;
};

The struct sg_table introduced earlier lives inside the ion_buffer, holding the scatter list of physical pages.

/* this function should only be called while dev->lock is held */
static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
                                            struct ion_device *dev,
                                            unsigned long len,
                                            unsigned long flags)
{
    struct ion_buffer *buffer;
    struct sg_table *table;
    int ret;

    buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
    if (!buffer)
        return ERR_PTR(-ENOMEM);
    buffer->heap = heap;
    buffer->flags = flags;
    ret = heap->ops->allocate(heap, buffer, len, flags);
    if (ret) {
        if (!(heap->flags & ION_HEAP_FLAG_DEFER_FREE))
            goto err2;
        if (ret == -EINTR)
            goto err2;
        ion_heap_freelist_drain(heap, 0);
        ret = heap->ops->allocate(heap, buffer, len, flags);
        if (ret)
            goto err2;
    }
    if (buffer->sg_table == NULL) {
        WARN_ONCE(1, "This heap needs to set the sgtable");
        ret = -EINVAL;
        goto err1;
    }
    spin_lock(&heap->stat_lock);
    heap->num_of_buffers++;
    heap->num_of_alloc_bytes += len;
    if (heap->num_of_alloc_bytes > heap->alloc_bytes_wm)
        heap->alloc_bytes_wm = heap->num_of_alloc_bytes;
    spin_unlock(&heap->stat_lock);
    table = buffer->sg_table;
    buffer->dev = dev;
    buffer->size = len;
    buffer->dev = dev;
    buffer->size = len;
    INIT_LIST_HEAD(&buffer->attachments);
    INIT_LIST_HEAD(&buffer->vmas);
    mutex_init(&buffer->lock);
    if (IS_ENABLED(CONFIG_ION_FORCE_DMA_SYNC)) {
        int i;
        struct scatterlist *sg;
        /*
         * this will set up dma addresses for the sglist -- it is not
         * technically correct as per the dma api -- a specific
         * device isn't really taking ownership here. However, in
         * practice on our systems the only dma_address space is
         * physical addresses.
         */
        for_each_sg(table->sgl, sg, table->nents, i) {
            sg_dma_address(sg) = sg_phys(sg);
            sg_dma_len(sg) = sg->length;
        }
    }
    mutex_lock(&dev->buffer_lock);
    ion_buffer_add(dev, buffer);
    mutex_unlock(&dev->buffer_lock);
    atomic_long_add(len, &heap->total_allocated);
    return buffer;
err1:
    heap->ops->free(buffer);
err2:
    kfree(buffer);
    return ERR_PTR(ret);
}

The most important line in this function is ret = heap->ops->allocate(heap, buffer, len, flags);, which invokes the heap's own allocation function. The rest of the code is list handling and sg_table assignment.

The system heap's alloc function lives in kernel\msm-4.14\drivers\staging\android\ion\ion_system_heap.c:

static struct ion_heap_ops system_heap_ops = {
    .allocate = ion_system_heap_allocate,
    .free = ion_system_heap_free,
    .map_kernel = ion_heap_map_kernel,
    .unmap_kernel = ion_heap_unmap_kernel,
    .map_user = ion_heap_map_user,
    .shrink = ion_system_heap_shrink,
};

allocate is implemented by ion_system_heap_allocate:

static int ion_system_heap_allocate(struct ion_heap *heap,
                                    struct ion_buffer *buffer,
                                    unsigned long size,
                                    unsigned long flags)
{
    struct ion_system_heap *sys_heap = container_of(heap,
                                                    struct ion_system_heap,
                                                    heap);
    struct sg_table *table;
    struct sg_table table_sync = {0};
    struct scatterlist *sg;
    struct scatterlist *sg_sync;
    int ret = -ENOMEM;
    struct list_head pages;
    struct list_head pages_from_pool;
    struct page_info *info, *tmp_info;
    int i = 0;
    unsigned int nents_sync = 0;
    unsigned long size_remaining = PAGE_ALIGN(size);
    unsigned int max_order = orders[0];
    struct pages_mem data;
    unsigned int sz;
    int vmid = get_secure_vmid(buffer->flags);

    if (size / PAGE_SIZE > totalram_pages / 2)
        return -ENOMEM;
    if (ion_heap_is_system_heap_type(buffer->heap->type) &&
        is_secure_vmid_valid(vmid)) {
        pr_info("%s: System heap doesn't support secure allocations\n",
                __func__);
        return -EINVAL;
    }
    data.size = 0;
    INIT_LIST_HEAD(&pages);
    INIT_LIST_HEAD(&pages_from_pool);
    while (size_remaining > 0) {
        if (is_secure_vmid_valid(vmid))
            info = alloc_from_pool_preferred(
                    sys_heap, buffer, size_remaining,
                    max_order);
        else
            info = alloc_largest_available(
                    sys_heap, buffer, size_remaining,
                    max_order);
        if (IS_ERR(info)) {
            ret = PTR_ERR(info);
            goto err;
        }
        sz = (1 << info->order) * PAGE_SIZE;
        if (info->from_pool) {
            list_add_tail(&info->list, &pages_from_pool);
        } else {
            list_add_tail(&info->list, &pages);
            data.size += sz;
            ++nents_sync;
        }
        size_remaining -= sz;
        max_order = info->order;
        i++;
    }
    ret = ion_heap_alloc_pages_mem(&data);
    if (ret)
        goto err;
    table = kzalloc(sizeof(*table), GFP_KERNEL);
    if (!table) {
        ret = -ENOMEM;
        goto err_free_data_pages;
    }
    ret = sg_alloc_table(table, i, GFP_KERNEL);
    if (ret)
        goto err1;
    if (nents_sync) {
        ret = sg_alloc_table(&table_sync, nents_sync, GFP_KERNEL);
        if (ret)
            goto err_free_sg;
    }
    i = 0;
    sg = table->sgl;
    sg_sync = table_sync.sgl;
    /*
     * We now have two separate lists. One list contains pages from the
     * pool and the other pages from buddy. We want to merge these
     * together while preserving the ordering of the pages (higher order
     * first).
     */
    do {
        info = list_first_entry_or_null(&pages, struct page_info, list);
        tmp_info = list_first_entry_or_null(&pages_from_pool,
                                            struct page_info, list);
        if (info && tmp_info) {
            if (info->order >= tmp_info->order) {
                i = process_info(info, sg, sg_sync, &data, i);
                sg_sync = sg_next(sg_sync);
            } else {
                i = process_info(tmp_info, sg, 0, 0, i);
            }
        } else if (info) {
            i = process_info(info, sg, sg_sync, &data, i);
            sg_sync = sg_next(sg_sync);
        } else if (tmp_info) {
            i = process_info(tmp_info, sg, 0, 0, i);
        }
        sg = sg_next(sg);
    } while (sg);
    if (nents_sync) {
        if (vmid > 0) {
            ret = ion_hyp_assign_sg(&table_sync, &vmid, 1, true);
            if (ret)
                goto err_free_sg2;
        }
    }
    buffer->sg_table = table;
    if (nents_sync)
        sg_free_table(&table_sync);
    ion_heap_free_pages_mem(&data);
    return 0;
err_free_sg2:
    /* We failed to zero buffers. Bypass pool */
    buffer->private_flags |= ION_PRIV_FLAG_SHRINKER_FREE;
    if (vmid > 0)
        ion_hyp_unassign_sg(table, &vmid, 1, true, false);
    for_each_sg(table->sgl, sg, table->nents, i)
        free_buffer_page(sys_heap, buffer, sg_page(sg),
                         get_order(sg->length));
    if (nents_sync)
        sg_free_table(&table_sync);
err_free_sg:
    sg_free_table(table);
err1:
    kfree(table);
err_free_data_pages:
    ion_heap_free_pages_mem(&data);
err:
    list_for_each_entry_safe(info, tmp_info, &pages, list) {
        free_buffer_page(sys_heap, buffer, info->page, info->order);
        kfree(info);
    }
    list_for_each_entry_safe(info, tmp_info, &pages_from_pool, list) {
        free_buffer_page(sys_heap, buffer, info->page, info->order);
        kfree(info);
    }
    return ret;
}

ion_system_heap_allocate is quite long; in my view its core is the while block:

while (size_remaining > 0) {
    if (is_secure_vmid_valid(vmid))
        info = alloc_from_pool_preferred(
                sys_heap, buffer, size_remaining,
                max_order);
    else
        info = alloc_largest_available(
                sys_heap, buffer, size_remaining,
                max_order);
    if (IS_ERR(info)) {
        ret = PTR_ERR(info);
        goto err;
    }
    sz = (1 << info->order) * PAGE_SIZE;
    if (info->from_pool) {
        list_add_tail(&info->list, &pages_from_pool);
    } else {
        list_add_tail(&info->list, &pages);
        data.size += sz;
        ++nents_sync;
    }
    size_remaining -= sz;
    max_order = info->order;
    i++;
}
ret = ion_heap_alloc_pages_mem(&data);

size_remaining is again page-aligned: unsigned long size_remaining = PAGE_ALIGN(size);

The while loop keeps taking physical pages from a pool or from the buddy system, subtracting the size of each chunk from size_remaining after every round, until size_remaining reaches 0, which means the whole buffer has been obtained. When buffers are first allocated the pools contain nothing to hand out, so the pages are allocated from the buddy system through the Linux interfaces.

Inside the while loop, is_secure_vmid_valid decides which of the two allocation functions to call; alloc_from_pool_preferred allocates preferentially from the secure pool:

static struct page_info *alloc_from_pool_preferred(
        struct ion_system_heap *heap, struct ion_buffer *buffer,
        unsigned long size, unsigned int max_order)
{
    struct page *page;
    struct page_info *info;
    int i;

    if (buffer->flags & ION_FLAG_POOL_FORCE_ALLOC)
        goto force_alloc;
    info = kmalloc(sizeof(*info), GFP_KERNEL);
    if (!info)
        return ERR_PTR(-ENOMEM);
    for (i = 0; i < NUM_ORDERS; i++) {
        if (size < order_to_size(orders[i]))
            continue;
        if (max_order < orders[i])
            continue;
        page = alloc_from_secure_pool_order(heap, buffer, orders[i]);
        if (IS_ERR(page))
            continue;
        info->page = page;
        info->order = orders[i];
        info->from_pool = true;
        INIT_LIST_HEAD(&info->list);
        return info;
    }
    page = split_page_from_secure_pool(heap, buffer);
    if (!IS_ERR(page)) {
        info->page = page;
        info->order = 0;
        info->from_pool = true;
        INIT_LIST_HEAD(&info->list);
        return info;
    }
    kfree(info);
force_alloc:
    return alloc_largest_available(heap, buffer, size, max_order);
}

ION_FLAG_POOL_FORCE_ALLOC decides whether a forced allocation is requested; a forced allocation calls alloc_largest_available, which ultimately calls the Linux functions that take physical pages straight from the buddy system. For an introduction to struct page, see《Linux 物理內(nèi)存描述》.

The core of alloc_from_pool_preferred is the for loop, which looks for a reasonable physical chunk size to allocate. As noted, the buddy system keeps blocks of 2^order pages; the pools follow the same principle, except they are maintained as an array, usually containing only order 0 and order 4. The concrete definition is in kernel\msm-4.14\drivers\staging\android\ion\ion_system_heap.h:

#ifndef CONFIG_ALLOC_BUFFERS_IN_4K_CHUNKS
#if defined(CONFIG_IOMMU_IO_PGTABLE_ARMV7S)
static const unsigned int orders[] = {8, 4, 0};
#else
static const unsigned int orders[] = {4, 0};
#endif
#else
static const unsigned int orders[] = {0};
#endif
#define NUM_ORDERS ARRAY_SIZE(orders)

From my testing, current phones should be using orders[] = {4, 0}, i.e. the physical chunks requested are 4 KB or 64 KB.

Back to the for loop in alloc_from_pool_preferred, assuming orders[] = {4, 0}:

static inline unsigned int order_to_size(int order)
{
    return PAGE_SIZE << order;
}

PAGE_SIZE is the physical page size, usually 4 KB by default; ARMv8 supports 4 KB, 16 KB and 64 KB pages. Assuming the system uses 4 KB, the first order gives 2^4 × 4 KB = 64 KB. The line if (size < order_to_size(orders[i])) first checks whether the size to allocate is smaller than 64 KB; if so, we do not allocate from this order's array, because this order holds contiguous 64 KB chunks, and a buffer smaller than 64 KB would have to be split off from one. Physical page allocation always looks for the best-fitting size, so when size is smaller than order_to_size the loop continues and keeps searching the remaining orders. After 64 KB comes the 4 KB order, and in theory, with sizes rounded up to a page, nothing smaller than a page can be requested. If orders[] is not {4, 0} but contains more entries such as {16, 8, 4}, the for loop walks through all of them; if the smallest order is not 0, say order 1, the loop can still fail to find a fitting order, in which case we fall out of the loop and split a suitable chunk off a larger physical block by calling split_page_from_secure_pool.

struct page *split_page_from_secure_pool(struct ion_system_heap *heap,
                                         struct ion_buffer *buffer)
{
    int i, j;
    struct page *page;
    unsigned int order;

    mutex_lock(&heap->split_page_mutex);
    /*
     * Someone may have just split a page and returned the unused portion
     * back to the pool, so try allocating from the pool one more time
     * before splitting. We want to maintain large pages sizes when
     * possible.
     */
    page = alloc_from_secure_pool_order(heap, buffer, 0);
    if (!IS_ERR(page))
        goto got_page;
    for (i = NUM_ORDERS - 2; i >= 0; i--) {
        order = orders[i];
        page = alloc_from_secure_pool_order(heap, buffer, order);
        if (IS_ERR(page))
            continue;
        split_page(page, order);
        break;
    }
    /*
     * Return the remaining order-0 pages to the pool.
     * SetPagePrivate flag to mark memory as secure.
     */
    if (!IS_ERR(page)) {
        for (j = 1; j < (1 << order); j++) {
            SetPagePrivate(page + j);
            free_buffer_page(heap, buffer, page + j, 0);
        }
    }
got_page:
    mutex_unlock(&heap->split_page_mutex);
    return page;
}

page = alloc_from_secure_pool_order(heap, buffer, 0); allocates one page from order 0, i.e. the smallest physical pages left in the pool at this point. My guess at the design intent is that if even order 0 cannot produce a page, the allocation simply fails; the for loop below is, as the comment says, another attempt before splitting. split_page lives in kernel\msm-4.14\mm\page_alloc.c, which holds the core buddy-system interfaces; its allocation functions come up again later. I did not fully understand the kernel's implementation of split_page. The pages that split_page_from_secure_pool splits off are ultimately stored in info:

page = split_page_from_secure_pool(heap, buffer);
if (!IS_ERR(page)) {
    info->page = page;
    info->order = 0;
    info->from_pool = true;
    INIT_LIST_HEAD(&info->list);
    return info;
}

 

Back in alloc_from_pool_preferred, continue with the execution of alloc_from_secure_pool_order:

struct page *alloc_from_secure_pool_order(struct ion_system_heap *heap,
                                          struct ion_buffer *buffer,
                                          unsigned long order)
{
    int vmid = get_secure_vmid(buffer->flags);
    struct ion_page_pool *pool;

    if (!is_secure_vmid_valid(vmid))
        return ERR_PTR(-EINVAL);
    pool = heap->secure_pools[vmid][order_to_index(order)];
    return ion_page_pool_alloc_pool_only(pool);
}

The function is simple: it mainly finds the pool matching the order, then calls:

/*
 * Tries to allocate from only the specified Pool and returns NULL otherwise
 */
struct page *ion_page_pool_alloc_pool_only(struct ion_page_pool *pool)
{
    struct page *page = NULL;

    if (!pool)
        return ERR_PTR(-EINVAL);
    if (mutex_trylock(&pool->mutex)) {
        if (pool->high_count)
            page = ion_page_pool_remove(pool, true);
        else if (pool->low_count)
            page = ion_page_pool_remove(pool, false);
        mutex_unlock(&pool->mutex);
    }
    if (!page)
        return ERR_PTR(-ENOMEM);
    return page;
}

This function takes a page from the pool. It distinguishes high and low memory: with a 4 GB address space, high memory refers to the 3 GB-4 GB region used by the system. The high/low attribute is assigned to the pool's pages when they are taken from the Linux buddy system.

 

Back in the while loop of ion_system_heap_allocate: if the buffer is not allocated from a secure pool, alloc_largest_available is called:

static struct page_info *alloc_largest_available(struct ion_system_heap *heap,
                                                 struct ion_buffer *buffer,
                                                 unsigned long size,
                                                 unsigned int max_order)
{
    struct page *page;
    struct page_info *info;
    int i;
    bool from_pool;

    info = kmalloc(sizeof(*info), GFP_KERNEL);
    if (!info)
        return ERR_PTR(-ENOMEM);
    for (i = 0; i < NUM_ORDERS; i++) {
        if (size < order_to_size(orders[i]))
            continue;
        if (max_order < orders[i])
            continue;
        from_pool = !(buffer->flags & ION_FLAG_POOL_FORCE_ALLOC);
        page = alloc_buffer_page(heap, buffer, orders[i], &from_pool);
        if (IS_ERR(page))
            continue;
        info->page = page;
        info->order = orders[i];
        info->from_pool = from_pool;
        INIT_LIST_HEAD(&info->list);
        return info;
    }
    kfree(info);
    return ERR_PTR(-ENOMEM);
}

Here ION_FLAG_POOL_FORCE_ALLOC again determines whether a forced allocation is requested; if it is, the pools are bypassed. Then alloc_buffer_page is called:

static struct page *alloc_buffer_page(struct ion_system_heap *heap,
                                      struct ion_buffer *buffer,
                                      unsigned long order,
                                      bool *from_pool)
{
    bool cached = ion_buffer_cached(buffer);
    struct page *page;
    struct ion_page_pool *pool;
    int vmid = get_secure_vmid(buffer->flags);
    struct device *dev = heap->heap.priv;

    if (vmid > 0)
        pool = heap->secure_pools[vmid][order_to_index(order)];
    else if (!cached)
        pool = heap->uncached_pools[order_to_index(order)];
    else
        pool = heap->cached_pools[order_to_index(order)];
    page = ion_page_pool_alloc(pool, from_pool);
    if (IS_ERR(page))
        return page;
    if ((MAKE_ION_ALLOC_DMA_READY && vmid <= 0) || !(*from_pool))
        ion_pages_sync_for_device(dev, page, PAGE_SIZE << order,
                                  DMA_BIDIRECTIONAL);
    return page;
}

Having picked which pool to allocate from, it calls ion_page_pool_alloc, passing down both the pool and whether the pool should be used:

struct page *ion_page_pool_alloc(struct ion_page_pool *pool, bool *from_pool)
{
    struct page *page = NULL;

    BUG_ON(!pool);
    if (fatal_signal_pending(current))
        return ERR_PTR(-EINTR);
    if (*from_pool && mutex_trylock(&pool->mutex)) {
        if (pool->high_count)
            page = ion_page_pool_remove(pool, true);
        else if (pool->low_count)
            page = ion_page_pool_remove(pool, false);
        mutex_unlock(&pool->mutex);
    }
    if (!page) {
        page = ion_page_pool_alloc_pages(pool);
        *from_pool = false;
    }
    if (!page)
        return ERR_PTR(-ENOMEM);
    return page;
}

If allocating a page from the pool fails, or the pool should not be used, ion_page_pool_alloc_pages is called; it is really just the Linux buddy-allocator interface:

static void *ion_page_pool_alloc_pages(struct ion_page_pool *pool)
{
    struct page *page = alloc_pages(pool->gfp_mask, pool->order);

    return page;
}

Back to the while section of ion_system_heap_allocate:

sz = (1 << info->order) * PAGE_SIZE;
if (info->from_pool) {
    list_add_tail(&info->list, &pages_from_pool);
} else {
    list_add_tail(&info->list, &pages);
    data.size += sz;
    ++nents_sync;
}
size_remaining -= sz;
max_order = info->order;
i++;

Every page that comes out is stored in an info, which is added to one of two lists depending on whether it came from a pool. info->order holds the power of two; multiplying 2^order by the physical page size gives the size of this chunk, which is then subtracted from the total (size_remaining -= sz;). After the while loop the pages are added to the sg table.

On first use the pools hold no pages at all; everything is taken out of the Linux buddy system. Pages only land in the pools when they are freed.

Back in ion_alloc_fd: after the dma-buf is created, an fd is produced from it by calling:

int dma_buf_fd(struct dma_buf *dmabuf, int flags)
{
    int fd;

    if (!dmabuf || !dmabuf->file)
        return -EINVAL;

    fd = get_unused_fd_flags(flags);
    if (fd < 0)
        return fd;

    fd_install(fd, dmabuf->file);

    return fd;
}

Here the Linux helper get_unused_fd_flags obtains an unused fd number, and fd_install then binds the dma-buf's struct file to that fd.

The struct file itself was obtained earlier in ion_alloc_dmabuf: after the buffer has been acquired it calls dma_buf_export, which does:

file = anon_inode_getfile(bufname, &dma_buf_fops, dmabuf,
                          exp_info->flags);
if (IS_ERR(file)) {
    ret = PTR_ERR(file);
    goto err_dmabuf;
}

You can see that a file is requested and bound to the dma_buf_ops mentioned earlier, so the dma_buf_ops can in effect be invoked through the fd.

2. Memory release

void ion_system_heap_free(struct ion_buffer *buffer)
{
    struct ion_heap *heap = buffer->heap;
    struct ion_system_heap *sys_heap = container_of(heap,
                                                    struct ion_system_heap,
                                                    heap);
    struct sg_table *table = buffer->sg_table;
    struct scatterlist *sg;
    int i;
    int vmid = get_secure_vmid(buffer->flags);

    if (!(buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE) &&
        !(buffer->flags & ION_FLAG_POOL_FORCE_ALLOC)) {
        if (vmid < 0)
            ion_heap_buffer_zero(buffer);
    } else if (vmid > 0) {
        if (ion_hyp_unassign_sg(table, &vmid, 1, true, false))
            return;
    }
    for_each_sg(table->sgl, sg, table->nents, i)
        free_buffer_page(sys_heap, buffer, sg_page(sg),
                         get_order(sg->length));
    sg_free_table(table);
    kfree(table);
}

The function starts with some flag checks; the key part is for_each_sg, which frees every physical page in the scatter list via free_buffer_page:

/*
 * For secure pages that need to be freed and not added back to the pool; the
 * hyp_unassign should be called before calling this function
 */
void free_buffer_page(struct ion_system_heap *heap,
                      struct ion_buffer *buffer, struct page *page,
                      unsigned int order)
{
    bool cached = ion_buffer_cached(buffer);
    int vmid = get_secure_vmid(buffer->flags);

    if (!(buffer->flags & ION_FLAG_POOL_FORCE_ALLOC)) {
        struct ion_page_pool *pool;

        if (vmid > 0)
            pool = heap->secure_pools[vmid][order_to_index(order)];
        else if (cached)
            pool = heap->cached_pools[order_to_index(order)];
        else
            pool = heap->uncached_pools[order_to_index(order)];
        if (buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE)
            ion_page_pool_free_immediate(pool, page);
        else
            ion_page_pool_free(pool, page);
    } else {
        __free_pages(page, order);
    }
}

It picks the matching pool and then calls:

void ion_page_pool_free(struct ion_page_pool *pool, struct page *page)
{
    int ret;

    ret = ion_page_pool_add(pool, page);
    if (ret)
        ion_page_pool_free_pages(pool, page);
}

This saves the page back into the pool. But when system memory runs low, the ION heaps have to return the pooled pages to the buddy system; that reclaim is performed by the shrink function:

static int ion_system_heap_shrink(struct ion_heap *heap, gfp_t gfp_mask,
                                  int nr_to_scan)
{
    struct ion_system_heap *sys_heap;
    int nr_total = 0;
    int i, j, nr_freed = 0;
    int only_scan = 0;
    struct ion_page_pool *pool;

    sys_heap = container_of(heap, struct ion_system_heap, heap);
    if (!nr_to_scan)
        only_scan = 1;
    for (i = 0; i < NUM_ORDERS; i++) {
        nr_freed = 0;
        for (j = 0; j < VMID_LAST; j++) {
            if (is_secure_vmid_valid(j))
                nr_freed += ion_secure_page_pool_shrink(
                        sys_heap, j, i, nr_to_scan);
        }
        pool = sys_heap->uncached_pools[i];
        nr_freed += ion_page_pool_shrink(pool, gfp_mask, nr_to_scan);
        pool = sys_heap->cached_pools[i];
        nr_freed += ion_page_pool_shrink(pool, gfp_mask, nr_to_scan);
        nr_total += nr_freed;
        if (!only_scan) {
            nr_to_scan -= nr_freed;
            /* shrink completed */
            if (nr_to_scan <= 0)
                break;
        }
    }
    return nr_total;
}

This function is also fairly simple: apart from some bookkeeping, the important part is the call to ion_page_pool_shrink, which internally takes pages out of the pool and then calls

static void ion_page_pool_free_pages(struct ion_page_pool *pool,
				     struct page *page)
{
	__free_pages(page, pool->order);
}

__free_pages is again a Linux buddy-system interface, located in kernel\msm-4.14\mm\page_alloc.c.

Mapping system-heap memory into user space happens when the dma-buf ops call ion_heap_map_user. This function takes a very important parameter, struct vm_area_struct, which belongs to the process's virtual-memory management; once a few key fields of this structure are understood, the code below is straightforward. The structure is defined in kernel\msm-4.14\include\linux\mm_types.h:

/*
 * This struct defines a memory VMM memory area. There is one of these
 * per VM-area/task. A VM area is any part of the process virtual memory
 * space that has a special rule for the page-fault handlers (ie a shared
 * library, the executable area etc).
 */
struct vm_area_struct {
	/* The first cache line has the info for VMA tree walking. */

	unsigned long vm_start;		/* Our start address within vm_mm. */
	unsigned long vm_end;		/* The first byte after our end address
					   within vm_mm. */

	/* linked list of VM areas per task, sorted by address */
	struct vm_area_struct *vm_next, *vm_prev;

	struct rb_node vm_rb;

	/*
	 * Largest free memory gap in bytes to the left of this VMA.
	 * Either between this VMA and vma->vm_prev, or between one of the
	 * VMAs below us in the VMA rbtree and its ->vm_prev. This helps
	 * get_unmapped_area find a free area of the right size.
	 */
	unsigned long rb_subtree_gap;

	/* Second cache line starts here. */

	struct mm_struct *vm_mm;	/* The address space we belong to. */
	pgprot_t vm_page_prot;		/* Access permissions of this VMA. */
	unsigned long vm_flags;		/* Flags, see mm.h. */

	/*
	 * For areas with an address space and backing store,
	 * linkage into the address_space->i_mmap interval tree.
	 *
	 * For private anonymous mappings, a pointer to a null terminated string
	 * in the user process containing the name given to the vma, or NULL
	 * if unnamed.
	 */
	union {
		struct {
			struct rb_node rb;
			unsigned long rb_subtree_last;
		} shared;
		const char __user *anon_name;
	};

	/*
	 * A file's MAP_PRIVATE vma can be in both i_mmap tree and anon_vma
	 * list, after a COW of one of the file pages. A MAP_SHARED vma
	 * can only be in the i_mmap tree. An anonymous MAP_PRIVATE, stack
	 * or brk vma (with NULL file) can only be in an anon_vma list.
	 */
	struct list_head anon_vma_chain; /* Serialized by mmap_sem &
					  * page_table_lock */
	struct anon_vma *anon_vma;	/* Serialized by page_table_lock */

	/* Function pointers to deal with this struct. */
	const struct vm_operations_struct *vm_ops;

	/* Information about our backing store: */
	unsigned long vm_pgoff;		/* Offset (within vm_file) in PAGE_SIZE
					   units */
	struct file *vm_file;		/* File we map to (can be NULL). */
	void *vm_private_data;		/* was vm_pte (shared mem) */

	atomic_long_t swap_readahead_info;
#ifndef CONFIG_MMU
	struct vm_region *vm_region;	/* NOMMU mapping region */
#endif
#ifdef CONFIG_NUMA
	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
#endif
	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
	seqcount_t vm_sequence;
	atomic_t vm_ref_count;		/* see vma_get(), vma_put() */
#endif
} __randomize_layout;

The role of this structure is described at https://linux-kernel-labs./master/labs/memory_mapping.html; one is created when a user process calls mmap. It describes the virtual memory backing a set of physical pages: a contiguous stretch of virtual address space with uniform access attributes, whose size is an integral multiple of the physical page size. The meaning of each member is explained at https://blog.csdn.net/ganggexiongqi/article/details/6746248.

vm_start is the starting address of the region within the process's virtual address space.

int ion_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer,
		      struct vm_area_struct *vma)
{
	struct sg_table *table = buffer->sg_table;
	unsigned long addr = vma->vm_start;
	unsigned long offset = vma->vm_pgoff * PAGE_SIZE;
	struct scatterlist *sg;
	int i;
	int ret;

	for_each_sg(table->sgl, sg, table->nents, i) {
		struct page *page = sg_page(sg);
		unsigned long remainder = vma->vm_end - addr;
		unsigned long len = sg->length;

		if (offset >= sg->length) {
			offset -= sg->length;
			continue;
		} else if (offset) {
			page += offset / PAGE_SIZE;
			len = sg->length - offset;
			offset = 0;
		}
		len = min(len, remainder);
		ret = remap_pfn_range(vma, addr, page_to_pfn(page), len,
				      vma->vm_page_prot);
		if (ret)
			return ret;
		addr += len;
		if (addr >= vma->vm_end)
			return 0;
	}
	return 0;
}

Back in the code: addr = vma->vm_start records the starting virtual address, and vm_pgoff is the offset of the start of this virtual region within vm_file, in units of physical pages. For example, if there are 64 physical pages and the user maps 10 pages starting from page 5, vm_pgoff is 5. The for_each_sg loop takes the physical pages held in the scatterlist and maps them one entry at a time. First look at the check offset >= sg->length: why is it needed? If the offset is 6 physical pages but the current sg entry holds only 5, the remaining one page of offset must be consumed from the next sg entry, so the whole current entry is skipped.

The lines below do exactly that:

		if (offset >= sg->length) {
			offset -= sg->length;
			continue;
		} else if (offset) {
			page += offset / PAGE_SIZE;
			len = sg->length - offset;
			offset = 0;
		}

Suppose the next sg entry holds three physical pages; within it we only need to advance by page + 1. The leftover offset at that point is 1 page, because the earlier branch already did offset -= sg->length (6 - 5). len therefore becomes 3 - 1 = 2 pages, and since offset is no longer needed afterwards it is reset to 0. Those two pages still have to be mapped, so the code calls the Linux kernel function remap_pfn_range, which is well documented online. With that, mapping into user space is complete.
