DMA Excerpt

Published: January 17, 2024

How the system, the NIC, and DMA fit together
Data transfer can be triggered in two ways: either the software asks for data (via a function such as read) or the hardware asynchronously pushes data to the system.
In the first case, the steps involved can be summarized as follows:

  1. When a process calls read, the driver method allocates a DMA buffer and instructs the hardware to transfer its data into that buffer. The process is put to sleep.
  2. The hardware writes data to the DMA buffer and raises an interrupt when it’s done.
  3. The interrupt handler gets the input data, acknowledges the interrupt, and awakens the process, which is now able to read data.
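As a rough illustration of this flow, the sketch below shows a read method that allocates a coherent buffer, starts the transfer, and sleeps on a completion until the interrupt handler signals that the hardware is done. Everything named mydev_* here, including the start-of-transfer helper, is hypothetical; a real driver would program its own registers and report how much data actually arrived.

#include <linux/completion.h>
#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/interrupt.h>
#include <linux/uaccess.h>

/* Hypothetical per-device state; all names are illustrative only. */
struct mydev {
	struct device *dev;
	void *buf;              /* kernel virtual address of the DMA buffer */
	dma_addr_t buf_dma;     /* bus address handed to the hardware */
	struct completion done; /* signalled by the interrupt handler */
};

/* Device specific: write the DMA address/length registers and start the transfer. */
static void mydev_start_dma(struct mydev *md, dma_addr_t addr, size_t len)
{
	/* writel(...) calls against the device's registers would go here */
}

static ssize_t mydev_read(struct file *filp, char __user *ubuf,
			  size_t count, loff_t *ppos)
{
	struct mydev *md = filp->private_data;
	ssize_t ret = count;

	/* Step 1: allocate a DMA buffer and point the hardware at it. */
	md->buf = dma_alloc_coherent(md->dev, count, &md->buf_dma, GFP_KERNEL);
	if (!md->buf)
		return -ENOMEM;
	mydev_start_dma(md, md->buf_dma, count);

	/* The process sleeps until the ISR reports completion (step 2). */
	wait_for_completion(&md->done);

	/* Step 3: the awakened process copies the data to user space. */
	if (copy_to_user(ubuf, md->buf, count))
		ret = -EFAULT;
	dma_free_coherent(md->dev, count, md->buf, md->buf_dma);
	return ret;
}

static irqreturn_t mydev_irq(int irq, void *data)
{
	struct mydev *md = data;

	/* Step 2: data is in the buffer; ack the device and wake the reader. */
	complete(&md->done);
	return IRQ_HANDLED;
}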
The second case comes about when DMA is used asynchronously. This happens, for example, with data acquisition devices that go on pushing data even if nobody is reading them. In this case, the driver should maintain a buffer so that a subsequent read call will return all the accumulated data to user space. The steps involved in this kind of transfer are slightly different:

  1. The hardware raises an interrupt to announce that new data has arrived.
  2. The interrupt handler allocates a buffer and tells the hardware where to transfer its data.
  3. The peripheral device writes the data to the buffer and raises another interrupt when it’s done.
  4. The handler dispatches the new data, wakes any relevant process, and takes care of housekeeping.
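A hedged sketch of that asynchronous pattern follows, assuming a hypothetical acquisition device acqdev that accumulates data in a kfifo for a later read call. How the two interrupts are told apart (here a transfer_done flag) would really come from the device's status register, and error handling is omitted.

#include <linux/dma-mapping.h>
#include <linux/interrupt.h>
#include <linux/kfifo.h>
#include <linux/wait.h>

/* Hypothetical state for a device that keeps pushing data on its own. */
struct acqdev {
	struct device *dev;
	void *buf;                 /* current DMA buffer, NULL between runs */
	dma_addr_t buf_dma;
	size_t buf_len;
	bool transfer_done;        /* would be read from the device's status register */
	struct kfifo fifo;         /* accumulated data for later read() calls */
	wait_queue_head_t readers; /* processes sleeping in read() */
};

static irqreturn_t acqdev_irq(int irq, void *data)
{
	struct acqdev *a = data;

	if (!a->transfer_done) {
		/* Steps 1-2: new data announced; hand the device a buffer. */
		a->buf = dma_alloc_coherent(a->dev, a->buf_len,
					    &a->buf_dma, GFP_ATOMIC);
		/* ...then program a->buf_dma into the device (device specific). */
	} else {
		/* Steps 3-4: transfer finished; stash the data and wake any readers. */
		kfifo_in(&a->fifo, a->buf, a->buf_len);
		dma_free_coherent(a->dev, a->buf_len, a->buf, a->buf_dma);
		wake_up_interruptible(&a->readers);
	}
	return IRQ_HANDLED;
}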

A variant of the asynchronous approach is often seen with network cards. These cards often expect to see a circular buffer (often called a DMA ring buffer) established in memory shared with the processor; each incoming packet is placed in the next available buffer in the ring, and an interrupt is signaled. The driver then passes the network packets to the rest of the kernel and places a new DMA buffer in the ring.
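A minimal sketch of that ring arrangement is shown below; the rx_slot/myring structures, the fixed sizes, and the way the completed count reaches the handler are all assumptions, and error handling is omitted.

#include <linux/dma-mapping.h>
#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define RX_RING_SIZE 256
#define RX_BUF_SIZE  2048

/* One slot of a hypothetical DMA ring: a pre-mapped receive buffer. */
struct rx_slot {
	struct sk_buff *skb;
	dma_addr_t dma;
};

struct myring {
	struct net_device *ndev;
	struct device *dev;
	struct rx_slot slots[RX_RING_SIZE];
	unsigned int next;          /* next slot the hardware will fill */
};

/* Called from the interrupt handler: drain completed slots and refill them. */
static void myring_rx(struct myring *r, unsigned int completed)
{
	while (completed--) {
		struct rx_slot *slot = &r->slots[r->next];
		struct sk_buff *skb = slot->skb;

		/* Give the buffer back to the CPU before touching the data. */
		dma_unmap_single(r->dev, slot->dma, RX_BUF_SIZE, DMA_FROM_DEVICE);
		skb_put(skb, RX_BUF_SIZE); /* a real driver uses the reported length */
		skb->protocol = eth_type_trans(skb, r->ndev);
		netif_rx(skb);             /* pass the packet to the rest of the kernel */

		/* Place a new DMA buffer in the ring for the hardware to reuse. */
		slot->skb = netdev_alloc_skb(r->ndev, RX_BUF_SIZE);
		slot->dma = dma_map_single(r->dev, slot->skb->data,
					   RX_BUF_SIZE, DMA_FROM_DEVICE);
		/* ...and write slot->dma back into the corresponding descriptor. */

		r->next = (r->next + 1) % RX_RING_SIZE;
	}
}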

Drivers that use the following functions should include <linux/dma-mapping.h>.
By default, the kernel assumes that your device can perform DMA to any 32-bit address. If this is not the case, you should inform the kernel of that fact with a call to:
int dma_set_mask(struct device *dev, u64 mask);
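For example, a hypothetical device that can only generate 24-bit DMA addresses could announce that during probe; dma_set_mask() returns 0 when the kernel can satisfy the mask:

#include <linux/device.h>
#include <linux/dma-mapping.h>

static int mydrv_setup_dma(struct device *dev)
{
	/* Hypothetical device limited to 24-bit DMA addresses. */
	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
		dev_warn(dev, "no suitable DMA available\n");
		return -ENODEV;
	}
	return 0;
}

Current kernels also provide dma_set_mask_and_coherent(), which sets the streaming and coherent masks in a single call.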

static int alloc_rx_buffers(struct xxx *x)
{
	struct queue *queue = x->queues[0];
	int size;

	/* One coherent region large enough for the whole receive ring. */
	size = rx_ring_size * x->rx_buffer_size;
	queue->rx_buffers = dma_alloc_coherent(&x->pdev->dev, size,
					       &queue->rx_buffers_dma,
					       GFP_KERNEL);
	if (!queue->rx_buffers)
		return -ENOMEM;
	return 0;
}

queue->rx_buffers: the kernel virtual start address of the allocated memory; the kernel uses this address to access the buffer.
x->pdev->dev: the struct device pointer, which can be set up during platform initialization; it mainly carries fields such as dma_mask (the framebuffer drivers are a useful reference).
size: the actual size to allocate, passed in as the size argument.
queue->rx_buffers_dma: the returned DMA (bus/physical) address of the memory, which the device's DMA engine uses.
The DMA engine is built into the device, and dma_mask controls the range of memory that the device is allowed to access.
Afterwards, when a packet arrives, the NIC places it into the memory allocated above (the DMA buffer); the operating system can then take the data out of that buffer and hand it up to the protocol stack.
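Sketching that last step, and reusing the (placeholder) struct xxx from above: once the hardware reports that a frame has landed at some offset inside queue->rx_buffers, the driver can copy it into an skb and submit it. The x->ndev field, the xxx_rx_one() name, and the offset/len parameters are assumptions; a real driver reads them from its RX descriptors.

#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hand one received frame from the coherent DMA area to the protocol stack. */
static void xxx_rx_one(struct xxx *x, unsigned int offset, unsigned int len)
{
	struct queue *queue = x->queues[0];
	struct sk_buff *skb;

	skb = netdev_alloc_skb(x->ndev, len);     /* x->ndev: assumed net_device */
	if (!skb)
		return; /* drop the frame; the DMA slot gets reused regardless */

	/* Copy out of the DMA buffer so the hardware may overwrite the slot. */
	skb_put_data(skb, queue->rx_buffers + offset, len);
	skb->protocol = eth_type_trans(skb, x->ndev);
	netif_rx(skb); /* submit to the protocol stack */
}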

Source: https://blog.csdn.net/weixin_43651292/article/details/135623741