Direct memory access

From Wikipedia, the free encyclopedia

Direct memory access (DMA) is a feature of computer systems that allows certain hardware subsystems to access main system memory independently of the central processing unit (CPU).[1]

Without DMA, when the CPU is using programmed input/output, it is typically fully occupied for the entire duration of the read or write operation, and is thus unavailable to perform other work. With DMA, the CPU first initiates the transfer, then it does other operations while the transfer is in progress, and it finally receives an interrupt from the DMA controller (DMAC) when the operation is done. This feature is useful at any time that the CPU cannot keep up with the rate of data transfer, or when the CPU needs to perform work while waiting for a relatively slow I/O data transfer.

Many hardware systems use DMA, including disk drive controllers, graphics cards, network cards and sound cards. DMA is also used for intra-chip data transfer in some multi-core processors. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without DMA channels. Similarly, a processing element inside a multi-core processor can transfer data to and from its local memory without occupying its processor time, allowing computation and data transfer to proceed in parallel.

DMA can also be used for "memory to memory" copying or moving of data within memory. DMA can offload expensive memory operations, such as large copies or scatter-gather operations, from the CPU to a dedicated DMA engine. An implementation example is the I/O Acceleration Technology. DMA is of interest in network-on-chip and in-memory computing architectures.

Principles


Third-party

Motherboard of a NeXTcube computer (1990). The two large integrated circuits below the middle of the image are the DMA controller (left) and, unusually, an extra dedicated DMA controller (right) for the magneto-optical disc used instead of a hard disk drive in the first series of this computer model.

Standard DMA, also called third-party DMA, uses a DMA controller. A DMA controller can generate memory addresses and initiate memory read or write cycles. It contains several hardware registers that can be written and read by the CPU. These include a memory address register, a byte count register, and one or more control registers. Depending on what features the DMA controller provides, these control registers might specify some combination of the source, the destination, the direction of the transfer (reading from the I/O device or writing to the I/O device), the size of the transfer unit, and/or the number of bytes to transfer in one burst.[2]

To carry out an input, output or memory-to-memory operation, the host processor initializes the DMA controller with a count of the number of words to transfer, and the memory address to use. The CPU then commands the peripheral device to initiate a data transfer. The DMA controller then provides addresses and read/write control lines to the system memory. Each time a byte of data is ready to be transferred between the peripheral device and memory, the DMA controller increments its internal address register until the full block of data is transferred.
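
As a rough illustration of this register sequence, the following C sketch programs a hypothetical memory-mapped third-party DMA controller; the base address, register offsets and bit definitions are invented for the example and do not correspond to any particular device.

    #include <stdint.h>

    /* Hypothetical memory-mapped register layout of a simple third-party DMA
     * controller; the offsets and bit values are illustrative only. */
    #define DMA_BASE        0x40001000u
    #define DMA_REG(off)    (*(volatile uint32_t *)(DMA_BASE + (off)))

    #define DMA_ADDR        0x00u       /* memory address register          */
    #define DMA_COUNT       0x04u       /* byte count register              */
    #define DMA_CTRL        0x08u       /* control register                 */

    #define CTRL_DIR_READ   (1u << 0)   /* device -> memory                 */
    #define CTRL_IRQ_EN     (1u << 1)   /* raise an interrupt when done     */
    #define CTRL_START      (1u << 2)   /* start the transfer               */

    /* Program one block transfer as described in the text: load the address
     * and count registers, then start the transfer; completion is normally
     * signalled by an interrupt rather than by polling. */
    static void dma_start_read(uint32_t phys_addr, uint32_t nbytes)
    {
        DMA_REG(DMA_ADDR)  = phys_addr;     /* where to put the data */
        DMA_REG(DMA_COUNT) = nbytes;        /* how many bytes        */
        DMA_REG(DMA_CTRL)  = CTRL_DIR_READ | CTRL_IRQ_EN | CTRL_START;
    }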

Some examples of buses using third-party DMA are PATA, USB (before USB4), and SATA; however, their host controllers use bus mastering.[citation needed]

Bus mastering


In a bus mastering system, also known as a first-party DMA system, the CPU and peripherals can each be granted control of the memory bus. Where a peripheral can become a bus master, it can directly write to system memory without the involvement of the CPU, providing memory address and control signals as required. Some measures must be provided to put the processor into a hold condition so that bus contention does not occur.

Modes of operation


Burst mode


In burst mode, an entire block of data is transferred in one contiguous sequence. Once the DMA controller is granted access to the system bus by the CPU, it transfers all bytes in the data block before releasing control of the system buses back to the CPU; this renders the CPU inactive for relatively long periods of time. The mode is also called "Block Transfer Mode".

Cycle stealing mode


The cycle stealing mode is used in systems in which the CPU should not be disabled for the length of time needed for burst transfer modes. In the cycle stealing mode, the DMA controller obtains access to the system bus the same way as in burst mode, using BR (Bus Request) and BG (Bus Grant) signals, which are the two signals controlling the interface between the CPU and the DMA controller. However, in cycle stealing mode, after one unit of data transfer, control of the system bus is returned to the CPU via BG. It is then continually requested again via BR, transferring one unit of data per request, until the entire block of data has been transferred.[3] By continually obtaining and releasing control of the system bus, the DMA controller essentially interleaves instruction and data transfers. The CPU processes an instruction, then the DMA controller transfers one data value, and so on. Data is not transferred as quickly, but the CPU is not idled for as long as in burst mode. Cycle stealing mode is useful for controllers that monitor data in real time.

Transparent mode


Transparent mode takes the most time to transfer a block of data, yet it is also the most efficient mode in terms of overall system performance. In transparent mode, the DMA controller transfers data only when the CPU is performing operations that do not use the system buses. The primary advantage of transparent mode is that the CPU never stops executing its programs and the DMA transfer is free in terms of time, while the disadvantage is that the hardware needs to determine when the CPU is not using the system buses, which can be complex. This is also called "Hidden DMA data transfer mode".

Cache coherency


Cache incoherence due to DMA

DMA can lead to cache coherency problems. Imagine a CPU equipped with a cache and an external memory that can be accessed directly by devices using DMA. When the CPU accesses location X in the memory, the current value will be stored in the cache. Subsequent operations on X will update the cached copy of X, but not the external memory version of X, assuming a write-back cache. If the cache is not flushed to the memory before the next time a device tries to access X, the device will receive a stale value of X.

Similarly, if the cached copy of X is not invalidated when a device writes a new value to the memory, then the CPU will operate on a stale value of X.

This issue can be addressed in one of two ways in system design: Cache-coherent systems implement a method in hardware, called bus snooping, whereby external writes are signaled to the cache controller which then performs a cache invalidation for DMA writes or cache flush for DMA reads. Non-coherent systems leave this to software, where the OS must then ensure that the cache lines are flushed before an outgoing DMA transfer is started and invalidated before a memory range affected by an incoming DMA transfer is accessed. The OS must make sure that the memory range is not accessed by any running threads in the meantime. The latter approach introduces some overhead to the DMA operation, as most hardware requires a loop to invalidate each cache line individually.
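
The software-managed approach can be sketched as follows in C. The cache-maintenance and device-control functions here are hypothetical placeholders for whatever the architecture and driver framework actually provide; on Linux, for instance, the dma_sync_single_for_device()/dma_sync_single_for_cpu() helpers encapsulate this logic.

    #include <stddef.h>

    /* Hypothetical architecture-specific cache operations; on a real system
     * these would be supplied by the OS or CPU vendor. */
    void cache_clean_range(void *addr, size_t len);      /* write back dirty lines */
    void cache_invalidate_range(void *addr, size_t len); /* discard cached copies  */

    void device_dma_to_memory(void *buf, size_t len);    /* hypothetical: start incoming DMA  */
    void device_dma_from_memory(void *buf, size_t len);  /* hypothetical: start outgoing DMA  */
    void wait_for_dma_complete(void);                    /* hypothetical: block until the IRQ */

    /* Outgoing transfer: clean (flush) the CPU's dirty lines first so the
     * device reads current data from RAM rather than a stale copy. */
    void dma_send(void *buf, size_t len)
    {
        cache_clean_range(buf, len);
        device_dma_from_memory(buf, len);
        wait_for_dma_complete();
    }

    /* Incoming transfer: invalidate the cached copies so the CPU does not
     * read stale pre-DMA contents from its cache afterwards. */
    void dma_receive(void *buf, size_t len)
    {
        cache_invalidate_range(buf, len);   /* drop any existing copies          */
        device_dma_to_memory(buf, len);
        wait_for_dma_complete();
        cache_invalidate_range(buf, len);   /* guard against speculative refills */
    }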

Hybrids also exist, where the secondary L2 cache is coherent while the L1 cache (typically on-CPU) is managed by software.

Examples


ISA


In the original IBM PC (and the follow-up PC/XT), there was only one Intel 8237 DMA controller capable of providing four DMA channels (numbered 0–3). These DMA channels performed 8-bit transfers (as the 8237 was an 8-bit device, ideally matched to the PC's i8088 CPU/bus architecture), could only address the first (i8086/8088-standard) megabyte of RAM, and were limited to addressing single 64 kB segments within that space (although the source and destination channels could address different segments). Additionally, the controller could only be used for transfers to, from or between expansion bus I/O devices, as the 8237 could only perform memory-to-memory transfers using channels 0 & 1, of which channel 0 in the PC (& XT) was dedicated to dynamic memory refresh. This prevented it from being used as a general-purpose "Blitter", and consequently block memory moves in the PC, limited by the general PIO speed of the CPU, were very slow.

With the IBM PC/AT, the enhanced AT bus (more familiarly retronymed as the Industry Standard Architecture (ISA)) added a second 8237 DMA controller to provide three additional, and as highlighted by resource clashes with the XT's additional expandability over the original PC, much-needed channels (5–7; channel 4 is used as a cascade to the first 8237). ISA DMA's extended 24-bit address bus width allows it to address the lower 16 MB of memory.[4] The page register was also rewired to address the full 16 MB memory address space of the 80286 CPU. This second controller was also integrated in a way capable of performing 16-bit transfers when an I/O device is used as the data source and/or destination (as it actually only processes data itself for memory-to-memory transfers, otherwise simply controlling the data flow between other parts of the 16-bit system, making its own data bus width relatively immaterial), doubling data throughput when the upper three channels are used. For compatibility, the lower four DMA channels were still limited to 8-bit transfers only, and whilst memory-to-memory transfers were now technically possible due to the freeing up of channel 0 from having to handle DRAM refresh, from a practical standpoint they were of limited value because of the controller's consequent low throughput compared to what the CPU could now achieve (i.e., a 16-bit, more optimised 80286 running at a minimum of 6 MHz, vs an 8-bit controller locked at 4.77 MHz). In both cases, the 64 kB segment boundary issue remained, with individual transfers unable to cross segments (instead "wrapping around" to the start of the same segment) even in 16-bit mode, although this was in practice more a problem of programming complexity than performance as the continued need for DRAM refresh (however handled) to monopolise the bus approximately every 15 μs prevented use of large (and fast, but uninterruptible) block transfers.

Due to their lagging performance (1.6 MB/s maximum 8-bit transfer capability at 5 MHz,[5] but no more than 0.9 MB/s in the PC/XT and 1.6 MB/s for 16-bit transfers in the AT due to ISA bus overheads and other interference such as memory refresh interruptions[1]) and unavailability of any speed grades that would allow installation of direct replacements operating at speeds higher than the original PC's standard 4.77 MHz clock, these devices have been effectively obsolete since the late 1980s. In particular, the advent of the 80386 processor in 1985 and its capacity for 32-bit transfers (although great improvements in the efficiency of address calculation and block memory moves in Intel CPUs after the 80186 meant that PIO transfers even by the 16-bit-bus 286 and 386SX could still easily outstrip the 8237), as well as the development of further evolutions to (EISA) or replacements for (MCA, VLB and PCI) the "ISA" bus with their own much higher-performance DMA subsystems (up to a maximum of 33 MB/s for EISA, 40 MB/s for MCA, and typically 133 MB/s for VLB/PCI), made the original DMA controllers seem more of a performance millstone than a booster. They remained supported only to the extent required by built-in legacy PC hardware on later machines. The pieces of legacy hardware that continued to use ISA DMA after 32-bit expansion buses became common were Sound Blaster cards that needed to maintain full hardware compatibility with the Sound Blaster standard; and Super I/O devices on motherboards that often integrated a built-in floppy disk controller, an IrDA infrared controller when FIR (fast infrared) mode is selected, and an IEEE 1284 parallel port controller when ECP mode is selected. In cases where an original 8237 or a direct compatible was still used, transfers to or from these devices may still be limited to the first 16 MB of main RAM regardless of the system's actual address space or amount of installed memory.

Each DMA channel has a 16-bit address register and a 16-bit count register associated with it. To initiate a data transfer the device driver sets up the DMA channel's address and count registers together with the direction of the data transfer, read or write. It then instructs the DMA hardware to begin the transfer. When the transfer is complete, the device interrupts the CPU.
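
A minimal sketch of that driver sequence for the first (8-bit) 8237 is shown below, assuming the conventional IBM PC port assignments; a real driver would also coordinate with the peripheral itself (for example the floppy controller) and handle errors. The port-I/O helper is a hypothetical placeholder for the platform's port-write primitive.

    #include <stdint.h>

    /* Hypothetical port-I/O helper wrapping the CPU's OUT instruction. */
    void port_write8(uint16_t port, uint8_t val);

    /* Conventional IBM PC port assignments for the first 8237 and its page
     * registers, assumed here from the standard PC/AT layout. */
    #define DMA1_ADDR(ch)   ((uint16_t)((ch) * 2))      /* address register     */
    #define DMA1_COUNT(ch)  ((uint16_t)((ch) * 2 + 1))  /* count register       */
    #define DMA1_MASK       0x0A                        /* single-channel mask  */
    #define DMA1_MODE       0x0B                        /* mode register        */
    #define DMA1_CLEAR_FF   0x0C                        /* clear byte flip-flop */
    #define DMA_PAGE_CH2    0x81                        /* page reg, channel 2  */

    /* Prepare channel 2 (conventionally the floppy controller) for a transfer
     * from the device into memory. The buffer must lie below 16 MB and must
     * not cross a 64 kB boundary, as discussed in the text. */
    void isa_dma_setup_ch2_read(uint32_t phys, uint16_t len)
    {
        port_write8(DMA1_MASK, 0x04 | 2);       /* mask channel 2 while programming  */
        port_write8(DMA1_CLEAR_FF, 0);          /* reset the low/high byte flip-flop */
        port_write8(DMA1_MODE, 0x46);           /* single mode, write to memory, ch2 */

        port_write8(DMA1_ADDR(2), phys & 0xFF);         /* address bits 0-7   */
        port_write8(DMA1_ADDR(2), (phys >> 8) & 0xFF);  /* address bits 8-15  */
        port_write8(DMA_PAGE_CH2, (phys >> 16) & 0xFF); /* page (bits 16-23)  */

        port_write8(DMA1_COUNT(2), (uint8_t)((len - 1) & 0xFF));   /* count = length - 1 */
        port_write8(DMA1_COUNT(2), (uint8_t)((len - 1) >> 8));

        port_write8(DMA1_MASK, 2);              /* unmask: channel 2 ready for DRQ   */
    }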

Scatter-gather or vectored I/O DMA allows the transfer of data to and from multiple memory areas in a single DMA transaction. It is equivalent to the chaining together of multiple simple DMA requests. The motivation is to off-load multiple input/output interrupt and data copy tasks from the CPU.
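
A typical way to express such a transaction is a chain of descriptors in memory that the DMA engine walks on its own. The following C sketch shows a hypothetical descriptor format; the field layout and flag bits are invented for the example and are not tied to any particular engine.

    #include <stdint.h>

    /* Hypothetical in-memory descriptor for a scatter-gather capable DMA
     * engine: each element names one contiguous region, and the elements are
     * chained so the engine walks the list without CPU intervention. */
    struct dma_sg_desc {
        uint64_t phys_addr;   /* physical address of this fragment                */
        uint32_t length;      /* length of this fragment in bytes                 */
        uint32_t flags;       /* e.g. "last descriptor", "interrupt on done"      */
        uint64_t next;        /* physical address of the next descriptor, 0 = end */
    };

    #define SG_FLAG_LAST   (1u << 0)
    #define SG_FLAG_IRQ    (1u << 1)

    /* Build a chain over an array of fragments; the engine is then pointed at
     * descs[0] and performs the whole vectored transfer as one transaction. */
    static void build_sg_chain(struct dma_sg_desc *descs,
                               const uint64_t *addrs, const uint32_t *lens,
                               unsigned n, uint64_t descs_phys)
    {
        for (unsigned i = 0; i < n; i++) {
            descs[i].phys_addr = addrs[i];
            descs[i].length    = lens[i];
            descs[i].flags     = (i == n - 1) ? (SG_FLAG_LAST | SG_FLAG_IRQ) : 0;
            descs[i].next      = (i == n - 1)
                               ? 0
                               : descs_phys + (i + 1) * sizeof(struct dma_sg_desc);
        }
    }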

DRQ stands for Data request; DACK for Data acknowledge. These symbols, seen on hardware schematics of computer systems with DMA functionality, represent electronic signaling lines between the CPU and DMA controller. Each DMA channel has one Request and one Acknowledge line. A device that uses DMA must be configured to use both lines of the assigned DMA channel.

16-bit ISA permitted bus mastering.[6]

Standard ISA DMA assignments:[citation needed]

  0. DRAM refresh (obsolete)
  1. User hardware, usually ISA sound card
  2. Floppy disk controller
  3. WDMA for hard disk controller (replaced by UDMA modes), parallel port (ECP capable port), or certain Sound Blaster clones like the OPTi 928
  4. Cascade to the first 8237 DMA controller
  5. Hard disk controller (PS/2 only), or user hardware, usually ISA sound card
  6. User hardware
  7. User hardware

PCI


Unlike ISA, the PCI architecture has no central DMA controller. Instead, a PCI device can request control of the bus ("become the bus master") and request to read from and write to system memory. More precisely, a PCI component requests bus ownership from the PCI bus controller (usually the PCI host bridge or a PCI-to-PCI bridge[7]), which will arbitrate if several devices request bus ownership simultaneously, since there can only be one bus master at a time. When the component is granted ownership, it issues normal read and write commands on the PCI bus, which are claimed by the PCI bus controller.

As an example, on an Intel Core-based PC, the southbridge will forward the transactions to the memory controller (which is integrated on the CPU die) using DMI, which will in turn convert them to DDR operations and send them out on the memory bus. As a result, there are quite a number of steps involved in a PCI DMA transfer; however, that poses little problem, since the PCI device, or the PCI bus itself, is an order of magnitude slower than the rest of the components (see list of device bandwidths).

A modern x86 CPU may use more than 4 GB of memory, either utilizing the native 64-bit mode of an x86-64 CPU or the Physical Address Extension (PAE), a 36-bit addressing mode. In such a case, a device using DMA with a 32-bit address bus is unable to address memory above the 4 GB line. The Double Address Cycle (DAC) mechanism, if implemented on both the PCI bus and the device itself,[8] enables 64-bit DMA addressing. Otherwise, the operating system needs to work around the problem either by using costly double buffers (DOS/Windows nomenclature), also known as bounce buffers (FreeBSD/Linux), or by using an IOMMU to provide address translation services if one is present.
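
In a Linux PCI driver this negotiation is commonly written with the kernel's DMA-mapping API, roughly as in the sketch below; the function names are from that API, and the surrounding driver (probe routine, device structure) is assumed.

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    /* Sketch of the usual probe-time negotiation in a Linux PCI driver: ask
     * for 64-bit DMA addressing and fall back to 32-bit, in which case the
     * kernel transparently uses bounce buffers or an IOMMU for memory above
     * the 4 GB line. */
    static int example_setup_dma(struct pci_dev *pdev)
    {
        int err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
        if (err) {
            /* device or platform cannot do 64-bit addressing */
            err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
            if (err) {
                dev_err(&pdev->dev, "no usable DMA configuration\n");
                return err;
            }
        }
        return 0;
    }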

I/OAT


As an example of DMA engine incorporated in a general-purpose CPU, some Intel Xeon chipsets include a DMA engine called I/O Acceleration Technology (I/OAT), which can offload memory copying from the main CPU, freeing it to do other work.[9] In 2006, Intel's Linux kernel developer Andrew Grover performed benchmarks using I/OAT to offload network traffic copies and found no more than 10% improvement in CPU utilization with receiving workloads.[10]

DDIO


Further performance-oriented enhancements to the DMA mechanism have been introduced in Intel Xeon E5 processors with their Data Direct I/O (DDIO) feature, allowing the DMA "windows" to reside within CPU caches instead of system RAM. As a result, CPU caches are used as the primary source and destination for I/O, allowing network interface controllers (NICs) to DMA directly to the last-level cache (L3 cache) of local CPUs and avoid costly fetching of the I/O data from system RAM. Consequently, DDIO reduces the overall I/O processing latency, allows processing of the I/O to be performed entirely in-cache, prevents the available RAM bandwidth/latency from becoming a performance bottleneck, and may lower the power consumption by allowing RAM to remain longer in a low-powered state.[11][12][13][14]

AHB


In systems-on-a-chip and embedded systems, typical system bus infrastructure is a complex on-chip bus such as AMBA High-performance Bus. AMBA defines two kinds of AHB components: master and slave. A slave interface is similar to programmed I/O through which the software (running on embedded CPU, e.g. ARM) can write/read I/O registers or (less commonly) local memory blocks inside the device. A master interface can be used by the device to perform DMA transactions to/from system memory without heavily loading the CPU.

Therefore, high-bandwidth devices such as network controllers that need to transfer huge amounts of data to/from system memory will have two interface adapters to the AHB: a master and a slave interface. This is because on-chip buses like AHB do not support tri-stating the bus or alternating the direction of any line on the bus. As with PCI, no central DMA controller is required since the DMA is bus mastering, but an arbiter is required when multiple masters are present on the system.

Internally, a multichannel DMA engine is usually present in the device to perform multiple concurrent scatter-gather operations as programmed by the software.

Cell


As an example usage of DMA in a multiprocessor-system-on-chip, IBM/Sony/Toshiba's Cell processor incorporates a DMA engine for each of its 9 processing elements including one Power processor element (PPE) and eight synergistic processor elements (SPEs). Since the SPE's load/store instructions can read/write only its own local memory, an SPE entirely depends on DMAs to transfer data to and from the main memory and local memories of other SPEs. Thus the DMA acts as a primary means of data transfer among cores inside this CPU (in contrast to cache-coherent CMP architectures such as Intel's cancelled general-purpose GPU, Larrabee).

DMA in Cell is fully cache coherent (note however local stores of SPEs operated upon by DMA do not act as globally coherent cache in the standard sense). In both read ("get") and write ("put"), a DMA command can transfer either a single block area of size up to 16 KB, or a list of 2 to 2048 such blocks. The DMA command is issued by specifying a pair of a local address and a remote address: for example when a SPE program issues a put DMA command, it specifies an address of its own local memory as the source and a virtual memory address (pointing to either the main memory or the local memory of another SPE) as the target, together with a block size. According to an experiment, an effective peak performance of DMA in Cell (3 GHz, under uniform traffic) reaches 200 GB per second.[15]
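
On the SPE side, such transfers are typically issued with the MFC intrinsics from the Cell SDK's spu_mfcio.h, roughly as sketched below; alignment handling, size checking and error handling are omitted, and the effective address and buffer are assumed to be suitably aligned.

    /* SPE-side sketch using the Cell SDK's MFC intrinsics; each command
     * moves at most 16 KB, per the limits described in the text. */
    #include <spu_mfcio.h>

    #define TAG 3                         /* any tag id in 0..31 */

    static char local_buf[16384] __attribute__((aligned(128)));

    void fetch_block(unsigned long long ea, unsigned int size)
    {
        /* "get": main memory (or another SPE's local store) -> local store */
        mfc_get(local_buf, ea, size, TAG, 0, 0);

        /* wait only for commands carrying our tag to complete */
        mfc_write_tag_mask(1 << TAG);
        mfc_read_tag_status_all();
    }

    void store_block(unsigned long long ea, unsigned int size)
    {
        /* "put": local store -> effective address in main memory */
        mfc_put(local_buf, ea, size, TAG, 0, 0);
        mfc_write_tag_mask(1 << TAG);
        mfc_read_tag_status_all();
    }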

DMA controllers


Pipelining


Processors with scratchpad memory and DMA (such as digital signal processors and the Cell processor) may benefit from software overlapping DMA memory operations with processing, via double buffering or multibuffering. For example, the on-chip memory is split into two buffers; the processor may be operating on data in one, while the DMA engine is loading and storing data in the other. This allows the system to avoid memory latency and exploit burst transfers, at the expense of needing a predictable memory access pattern.[citation needed]
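
A schematic double-buffering loop is sketched below in C; dma_load_async(), dma_wait() and process() are hypothetical placeholders for the platform's asynchronous DMA primitives (for example, the MFC calls on Cell) and for the application's computation.

    #include <stddef.h>
    #include <stdint.h>

    #define CHUNK 4096

    /* Hypothetical platform primitives: start an asynchronous DMA load of one
     * chunk into an on-chip buffer, and wait for a previously started load. */
    void dma_load_async(void *dst, uint64_t src, size_t len, int tag);
    void dma_wait(int tag);
    void process(uint8_t *buf, size_t len);   /* the actual computation */

    static uint8_t buf[2][CHUNK];             /* two on-chip buffers */

    /* Double buffering: while the processor works on one buffer, the DMA
     * engine is already filling the other, hiding memory latency.
     * For brevity, 'total' is assumed to be a nonzero multiple of CHUNK. */
    void process_stream(uint64_t src, size_t total)
    {
        int cur = 0;
        dma_load_async(buf[cur], src, CHUNK, cur);           /* prime the pipeline  */

        for (size_t off = 0; off < total; off += CHUNK) {
            size_t next_off = off + CHUNK;
            if (next_off < total)                            /* prefetch next chunk */
                dma_load_async(buf[cur ^ 1], src + next_off, CHUNK, cur ^ 1);

            dma_wait(cur);                                   /* current chunk ready */
            process(buf[cur], CHUNK);
            cur ^= 1;                                        /* swap buffers        */
        }
    }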

See also


References

  1. ^ a b "DMA Fundamentals on various PC platforms, National Instruments, pages 6 & 7" (PDF). University of Colorado Boulder. Retrieved 26 April 2025.
  2. ^ Osborne, Adam (1980). An Introduction to Microcomputers: Volume 1: Basic Concepts (2nd ed.). Osborne McGraw Hill. pp. 5–64 through 5–93. ISBN 0931988349.
  3. ^ Hayes, John P. (1978). Computer Architecture and Organization. McGraw-Hill International Book Company. pp. 426–427. ISBN 0-07-027363-4.
  4. ^ "ISA DMA - OSDev Wiki". wiki.osdev.org. Retrieved 2025-08-05.
  5. ^ "Intel 8237 & 8237-2 Datasheet" (PDF). JKbox RC702 subsite. Retrieved 20 April 2019.
  6. ^ Intel Corp. (2025-08-05), "Chapter 12: ISA Bus" (PDF), PC Architecture for Technicians: Level 1, retrieved 2025-08-05
  7. ^ "Bus Specifics - Writing Device Drivers for Oracle? Solaris 11.3". docs.oracle.com. Retrieved 2025-08-05.
  8. ^ "Physical Address Extension — PAE Memory and Windows". Microsoft Windows Hardware Development Central. 2005. Retrieved 2025-08-05.
  9. ^ Corbet, Jonathan (December 8, 2005). "Memory copies in hardware". LWN.net.
  10. ^ Grover, Andrew (2025-08-05). "I/OAT on LinuxNet wiki". Overview of I/OAT on Linux, with links to several benchmarks. Archived from the original on 2025-08-05. Retrieved 2025-08-05.
  11. ^ "Intel Data Direct I/O (Intel DDIO): Frequently Asked Questions" (PDF). Intel. March 2012. Retrieved 2025-08-05.
  12. ^ Rashid Khan (2025-08-05). "Pushing the Limits of Kernel Networking". redhat.com. Retrieved 2025-08-05.
  13. ^ "Achieving Lowest Latencies at Highest Message Rates with Intel Xeon Processor E5-2600 and Solarflare SFN6122F 10 GbE Server Adapter" (PDF). solarflare.com. 2025-08-05. Retrieved 2025-08-05.
  14. ^ Alexander Duyck (2025-08-05). "Pushing the Limits of Kernel Networking" (PDF). linuxfoundation.org. p. 5. Retrieved 2025-08-05.
  15. ^ Kistler, Michael (May 2006). "Cell Multiprocessor Communication Network: Built for Speed". IEEE Micro. 26 (3): 10–23. doi:10.1109/MM.2006.49. S2CID 7735690.
  16. ^ "Am9517A Multimode DMA Controller" (PDF). Retrieved 2025-08-05.
  17. ^ "Z80? DMA Direct Memory Access Controller" (PDF). Retrieved 2025-08-05.
  18. ^ "Sharp 1986 Semiconductor Data Book" (PDF). p. 255-269. Retrieved 2025-08-05.
  19. ^ "pPD71037 Direct Memory Access (DMA) Controller" (PDF). p. 832(5b1). Retrieved 2025-08-05.
  20. ^ "μPD71071 DMA Controller" (PDF). p. 940(5g1). Retrieved 2025-08-05.

Sources
