
Computer cluster

From Wikipedia, the free encyclopedia
Technicians working on a large Linux cluster at the Chemnitz University of Technology, Germany
Sun Microsystems Solaris Cluster, with In-Row cooling
Taiwania series uses cluster architecture.

A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The newest manifestation of cluster computing is cloud computing.

The components of a cluster are usually connected to each other through fast local area networks, with each node (computer used as a server) running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware[1][better source needed] and the same operating system, although in some setups (e.g. using Open Source Cluster Application Resources (OSCAR)), different operating systems can be used on each computer, or different hardware.[2]

Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.[3]

Computer clusters emerged as a result of the convergence of a number of computing trends including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing.[citation needed] They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world such as IBM's Sequoia.[4] Prior to the advent of clusters, single-unit fault-tolerant mainframes with modular redundancy were employed; but the lower upfront cost of clusters and the increased speed of network fabric have favoured their adoption. In contrast to high-reliability mainframes, clusters are cheaper to scale out, but also have increased complexity in error handling, as in clusters error modes are not opaque to running programs.[5]

Basic concepts

A simple, home-built Beowulf cluster

The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations.

The computer clustering approach usually (but not always) connects a number of readily available computing nodes (e.g. personal computers used as servers) via a fast local area network.[6] The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via a single system image concept.[6]

Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such as peer-to-peer or grid computing which also use many nodes, but with a far more distributed nature.[6]

A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster which may be built with a few personal computers to produce a cost-effective alternative to traditional high-performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer.[7] The developers used Linux, the Parallel Virtual Machine toolkit and the Message Passing Interface library to achieve high performance at a relatively low cost.[8]

Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance. The TOP500 organization's semiannual list of the 500 fastest supercomputers often includes many clusters, e.g. the world's fastest machine in 2011 was the K computer which has a distributed memory, cluster architecture.[9]

History

A VAX 11/780, c. 1977, as used in early VAXcluster development

Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup.[10] Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's Law.

The history of early computer clusters is more or less directly tied to the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster.

The first production system designed as a cluster was the Burroughs B5700 in the mid-1960s. This allowed up to four computers, each with either one or two processors, to be tightly coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation.

Tandem NonStop II circa 1980

The first commercial loosely coupled clustering product was Datapoint Corporation's "Attached Resource Computer" (ARC) system, developed in 1977, and using ARCnet as the cluster interface. Clustering per se did not really take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VMS operating system. The ARC and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were the Tandem NonStop (a 1976 high-availability commercial product)[11][12] and the IBM S/390 Parallel Sysplex (circa 1994, primarily for business use).

Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network, supercomputers began to exploit parallelism within a single machine. Following the success of the CDC 6600 in 1964, the Cray 1 was delivered in 1976, and introduced internal parallelism via vector processing.[13] While early supercomputers excluded clusters and relied on shared memory, in time some of the fastest supercomputers (e.g. the K computer) relied on cluster architectures.

Attributes of clusters

A load balancing cluster with two servers and N user stations

Computer clusters may be configured for different purposes, ranging from general-purpose business needs such as web-service support to computation-intensive scientific calculations. The attributes described below are not exclusive; for example, a cluster built for either purpose may also use a high-availability approach.

"Load-balancing" clusters are configurations in which cluster-nodes share computational workload to provide better overall performance. For example, a web server cluster may assign different queries to different nodes, so the overall response time will be optimized.[14] However, approaches to load-balancing may significantly differ among applications, e.g. a high-performance cluster used for scientific computations would balance load with different algorithms from a web-server cluster which may just use a simple round-robin method by assigning each new request to a different node.[14]

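As a concrete illustration of the round-robin method mentioned above, the following minimal C sketch (the node count and the request loop are made up for the example; no particular load balancer is implied) assigns each incoming request to the next node in a fixed list, wrapping around at the end:

    /* Minimal round-robin dispatch sketch: requests are handed to nodes
       0, 1, 2, ... in turn, wrapping around after the last node. */
    #include <stdio.h>

    #define NUM_NODES 4

    static int next_node = 0;

    /* Return the index of the node that should handle the next request. */
    static int pick_node(void)
    {
        int node = next_node;
        next_node = (next_node + 1) % NUM_NODES;
        return node;
    }

    int main(void)
    {
        for (int request = 0; request < 10; request++)
            printf("request %d -> node %d\n", request, pick_node());
        return 0;
    }
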
Computer clusters are used for computation-intensive purposes, rather than handling IO-oriented operations such as web service or databases.[15] For instance, a computer cluster might support computational simulations of vehicle crashes or weather. Very tightly coupled computer clusters are designed for work that may approach "supercomputing".

"High-availability clusters" (also known as failover clusters, or HA clusters) improve the availability of the cluster approach. They operate by having redundant nodes, which are then used to provide service when system components fail. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure. There are commercial implementations of High-Availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for the Linux operating system.

Benefits


Clusters are primarily designed with performance in mind, but installations are based on many other factors. Fault tolerance (the ability of a system to continue operating despite a malfunctioning node) enables scalability, and in high-performance situations, allows for a low frequency of maintenance routines, resource consolidation (e.g., RAID), and centralized management. Advantages include enabling data recovery in the event of a disaster and providing parallel data processing and high processing capacity.[16][17]

Clusters provide scalability through the ability to add nodes horizontally: more computers may be added to the cluster to improve its performance, redundancy and fault tolerance. This can be an inexpensive alternative to scaling up a single node in the cluster, and it allows larger computational loads to be executed by a larger number of lower-performing computers.

Adding a new node to a cluster does not require taking the entire cluster down, which increases reliability: a single node can be taken down for maintenance while the rest of the cluster takes on its load.

Clustering a large number of computers also lends itself to the use of distributed file systems and RAID, both of which can increase the reliability and speed of a cluster.

Design and configuration

A typical Beowulf configuration

One of the issues in designing a cluster is how tightly coupled the individual nodes may be. For instance, a single computer job may require frequent communication among nodes: this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. The other extreme is where a computer job uses one or few nodes, and needs little or no inter-node communication, approaching grid computing.

In a Beowulf cluster, the application programs never see the computational nodes (also called slave computers) but only interact with the "Master" which is a specific computer handling the scheduling and management of the slaves.[15] In a typical implementation the Master has two network interfaces, one that communicates with the private Beowulf network for the slaves, the other for the general purpose network of the organization.[15] The slave computers typically have their own version of the same operating system, and local memory and disk space. However, the private slave network may also have a large and shared file server that stores global persistent data, accessed by the slaves as needed.[15]

A special purpose 144-node DEGIMA cluster is tuned to running astrophysical N-body simulations using the Multiple-Walk parallel tree code, rather than general purpose scientific computations.[18]

Due to the increasing computing power of each generation of game consoles, a novel use has emerged in which they are repurposed into high-performance computing (HPC) clusters. Examples of game console clusters are Sony PlayStation clusters and Microsoft Xbox clusters. Another example of a consumer product used this way is the Nvidia Tesla Personal Supercomputer workstation, which uses multiple graphics accelerator processor chips. Besides game consoles, high-end graphics cards can also be used. Using graphics cards (or rather their GPUs) for grid-computing calculations is vastly more economical than using CPUs, despite being less precise; however, when double-precision values are used, GPUs become as precise to work with as CPUs while still being much less costly to purchase.[2]

Computer clusters have historically run on separate physical computers with the same operating system. With the advent of virtualization, the cluster nodes may run on separate physical computers with different operating systems that are abstracted under a virtual layer so that they appear similar.[19][citation needed] The cluster may also be virtualized on various configurations as maintenance takes place; an example implementation is Xen as the virtualization manager with Linux-HA.[19]

Data sharing and communication


Data sharing

A NEC Nehalem cluster

As computer clusters were appearing during the 1980s, so were supercomputers. One of the elements that distinguished the two classes at that time was that early supercomputers relied on shared memory. Clusters do not typically use physically shared memory, while many supercomputer architectures have also abandoned it.

However, the use of a clustered file system is essential in modern computer clusters.[citation needed] Examples include the IBM General Parallel File System, Microsoft's Cluster Shared Volumes or the Oracle Cluster File System.

Message passing and communication


Two widely used approaches for communication between cluster nodes are MPI (Message Passing Interface) and PVM (Parallel Virtual Machine).[20]

PVM was developed at the Oak Ridge National Laboratory around 1989, before MPI was available. PVM must be directly installed on every cluster node and provides a set of software libraries that present the node as part of a single "parallel virtual machine". PVM provides a run-time environment for message passing, task and resource management, and fault notification. PVM can be used by user programs written in languages such as C, C++, and Fortran.[20][21]
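
For illustration, the following minimal C sketch assumes the classic PVM 3 C interface (pvm3.h): the task enrolls in the virtual machine, and if it was spawned by a parent task it packs its task id into a message buffer and sends it back to that parent:

    /* Minimal PVM 3 sketch (assumes pvm3.h and the PVM libraries are
       installed on every node, as described above). */
    #include <pvm3.h>
    #include <stdio.h>

    int main(void)
    {
        int mytid = pvm_mytid();      /* enroll this process in PVM */
        int parent = pvm_parent();    /* task id of the spawning task, if any */

        if (parent != PvmNoParent) {
            pvm_initsend(PvmDataDefault);   /* start a new message buffer */
            pvm_pkint(&mytid, 1, 1);        /* pack one integer */
            pvm_send(parent, 1);            /* send it with message tag 1 */
        } else {
            printf("task %x was not spawned by a parent task\n", mytid);
        }

        pvm_exit();                   /* leave the virtual machine */
        return 0;
    }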

MPI emerged in the early 1990s out of discussions among 40 organizations. The initial effort was supported by ARPA and the National Science Foundation. Rather than starting anew, the design of MPI drew on various features available in commercial systems of the time. The MPI specifications then gave rise to specific implementations, which typically use TCP/IP and socket connections.[20] MPI is now a widely available communications model that enables parallel programs to be written in languages such as C, Fortran and Python.[21] Thus, unlike PVM, which provides a concrete implementation, MPI is a specification which has been implemented in systems such as MPICH and Open MPI.[21][22]
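
As a minimal example of message passing with MPI, the following C program (an ordinary MPI code, typically compiled with a wrapper compiler such as mpicc and launched through the implementation's process manager) has every non-root rank send its rank number to rank 0, which prints the values it receives:

    /* Minimal MPI message-passing sketch: ranks 1..N-1 each send one
       integer to rank 0. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            for (int src = 1; src < size; src++) {
                int value;
                MPI_Recv(&value, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("rank 0 received %d from rank %d\n", value, src);
            }
        } else {
            MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }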

Cluster management

Low-cost and low energy tiny-cluster of Cubieboards, using Apache Hadoop on Lubuntu
A pre-release sample of the Ground Electronics/AB Open Circumference C25 cluster computer system, fitted with 8x Raspberry Pi 3 Model B+ and 1x UDOO x86 boards

One of the challenges in the use of a computer cluster is the cost of administering it, which can at times be as high as the cost of administering N independent machines if the cluster has N nodes.[23] In some cases this provides an advantage to shared memory architectures with lower administration costs.[23] This has also made virtual machines popular, due to the ease of administration.[23]

Task scheduling


When a large multi-user cluster needs to access very large amounts of data, task scheduling becomes a challenge. In a heterogeneous CPU-GPU cluster with a complex application environment, the performance of each job depends on the characteristics of the underlying cluster. Therefore, mapping tasks onto CPU cores and GPU devices presents significant challenges.[24] This is an area of ongoing research; algorithms that combine and extend MapReduce and Hadoop have been proposed and studied.[24]
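
As a toy illustration of the mapping problem only (not of the algorithms in the cited work), the following hypothetical C sketch places each task on whichever device pool, CPU or GPU, would finish it earlier according to made-up per-task cost estimates:

    /* Hypothetical greedy CPU/GPU placement sketch; task names and cost
       estimates are invented for the example. */
    #include <stdio.h>

    struct task { const char *name; double cpu_cost; double gpu_cost; };

    int main(void)
    {
        struct task tasks[] = {
            {"map-0", 10.0, 2.0}, {"map-1", 4.0, 3.5}, {"reduce-0", 6.0, 9.0},
        };
        double cpu_busy = 0.0, gpu_busy = 0.0;   /* accumulated busy time */

        for (int i = 0; i < 3; i++) {
            double cpu_finish = cpu_busy + tasks[i].cpu_cost;
            double gpu_finish = gpu_busy + tasks[i].gpu_cost;
            if (gpu_finish < cpu_finish) {
                gpu_busy = gpu_finish;
                printf("%s -> GPU (finishes at %.1f)\n", tasks[i].name, gpu_busy);
            } else {
                cpu_busy = cpu_finish;
                printf("%s -> CPU (finishes at %.1f)\n", tasks[i].name, cpu_busy);
            }
        }
        return 0;
    }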

Node failure management


When a node in a cluster fails, strategies such as "fencing" may be employed to keep the rest of the system operational.[25][26] Fencing is the process of isolating a node or protecting shared resources when a node appears to be malfunctioning. There are two classes of fencing methods; one disables a node itself, and the other disallows access to resources such as shared disks.[25]

The STONITH method stands for "Shoot The Other Node In The Head", meaning that the suspected node is disabled or powered off. For instance, power fencing uses a power controller to turn off an inoperable node.[25]

The resources fencing approach disallows access to resources without powering off the node. This may include persistent reservation fencing via SCSI-3, fibre channel fencing to disable the fibre channel port, or global network block device (GNBD) fencing to disable access to the GNBD server.
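
The following schematic C sketch illustrates the power-fencing idea only; power_off_node() is a hypothetical placeholder for whatever out-of-band power-controller interface a real high-availability stack would drive, and the heartbeat threshold is invented for the example:

    /* Schematic STONITH-style fencing sketch; the threshold and the
       power_off_node() helper are illustrative, not from any real HA stack. */
    #include <stdio.h>

    #define MAX_MISSED_HEARTBEATS 3

    /* Hypothetical placeholder: a real cluster would contact an out-of-band
       power controller here. */
    static void power_off_node(int node_id)
    {
        printf("fencing node %d: issuing power-off\n", node_id);
    }

    static void check_node(int node_id, int missed_heartbeats)
    {
        if (missed_heartbeats >= MAX_MISSED_HEARTBEATS) {
            power_off_node(node_id);   /* disable the suspect node */
            printf("node %d fenced; failover may proceed\n", node_id);
        }
    }

    int main(void)
    {
        check_node(7, 4);   /* example: node 7 has missed 4 heartbeats */
        return 0;
    }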

Software development and administration


Parallel programming


Load-balancing clusters such as web servers use cluster architectures to support a large number of users; typically each user request is routed to a specific node, achieving task parallelism without multi-node cooperation, since the main goal of the system is providing rapid user access to shared data. However, "computer clusters" which perform complex computations for a small number of users need to take advantage of the parallel processing capabilities of the cluster and partition "the same computation" among several nodes.[27]

Automatic parallelization of programs remains a technical challenge, but parallel programming models can be used to achieve a higher degree of parallelism via the simultaneous execution of separate portions of a program on different processors.[27][28]
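
As a small sketch of partitioning "the same computation" among nodes, the following C program uses MPI collectives: rank 0 scatters equal chunks of an array, every rank sums its own chunk, and the partial sums are reduced back to rank 0 (the array length is chosen to divide evenly among the ranks, for brevity):

    /* Sketch of data partitioning with MPI: scatter, local work, reduce. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int n_per_rank = 4;
        int n = n_per_rank * size;
        int *data = NULL;
        if (rank == 0) {                      /* root builds the full input */
            data = malloc(n * sizeof(int));
            for (int i = 0; i < n; i++)
                data[i] = i + 1;
        }

        int chunk[4];                         /* n_per_rank elements */
        MPI_Scatter(data, n_per_rank, MPI_INT, chunk, n_per_rank, MPI_INT,
                    0, MPI_COMM_WORLD);

        long local_sum = 0, total = 0;
        for (int i = 0; i < n_per_rank; i++)
            local_sum += chunk[i];            /* each node works on its part */

        MPI_Reduce(&local_sum, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) {
            printf("sum of 1..%d = %ld\n", n, total);
            free(data);
        }

        MPI_Finalize();
        return 0;
    }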

Debugging and monitoring


Developing and debugging parallel programs on a cluster requires parallel language primitives and suitable tools such as those discussed by the High Performance Debugging Forum (HPDF) which resulted in the HPD specifications.[21][29] Tools such as TotalView were then developed to debug parallel implementations on computer clusters which use Message Passing Interface (MPI) or Parallel Virtual Machine (PVM) for message passing.

The University of California, Berkeley Network of Workstations (NOW) system gathers cluster data and stores them in a database, while a system such as PARMON, developed in India, allows visually observing and managing large clusters.[21]

Application checkpointing can be used to restore a given state of the system when a node fails during a long multi-node computation.[30] This is essential in large clusters, given that as the number of nodes increases, so does the likelihood of node failure under heavy computational loads. Checkpointing can restore the system to a stable state so that processing can resume without needing to recompute results.[30]
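
The following schematic C sketch shows application-level checkpointing in its simplest form; the file name, the checkpoint interval and the "work" in the loop are illustrative choices, not part of any standard:

    /* Schematic checkpoint/restart sketch: the loop counter and an
       accumulated value are saved periodically and reloaded on startup. */
    #include <stdio.h>

    #define CHECKPOINT_FILE  "state.ckpt"
    #define CHECKPOINT_EVERY 100
    #define TOTAL_STEPS      1000

    int main(void)
    {
        long step = 0;
        double value = 0.0;

        /* Resume from the last checkpoint if one is present. */
        FILE *f = fopen(CHECKPOINT_FILE, "rb");
        if (f) {
            if (fread(&step, sizeof step, 1, f) == 1)
                fread(&value, sizeof value, 1, f);
            fclose(f);
        }

        for (; step < TOTAL_STEPS; step++) {
            value += 0.5;                      /* stand-in for real work */

            if (step % CHECKPOINT_EVERY == 0) {
                f = fopen(CHECKPOINT_FILE, "wb");
                if (f) {
                    long next = step + 1;      /* restart after this step */
                    fwrite(&next, sizeof next, 1, f);
                    fwrite(&value, sizeof value, 1, f);
                    fclose(f);
                }
            }
        }

        printf("finished at step %ld, value %.1f\n", step, value);
        return 0;
    }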

Implementations


The Linux world supports various cluster software. For application clustering, there are distcc and MPICH. Linux Virtual Server and Linux-HA are director-based clusters that allow incoming requests for services to be distributed across multiple cluster nodes. MOSIX, LinuxPMI, Kerrighed and OpenSSI are full-blown clusters integrated into the kernel that provide automatic process migration among homogeneous nodes. OpenSSI, openMosix and Kerrighed are single-system image implementations.

Microsoft Windows Compute Cluster Server 2003, based on the Windows Server platform, provides pieces for high-performance computing such as the job scheduler, the MS-MPI library and management tools.

gLite is a set of middleware technologies created by the Enabling Grids for E-sciencE (EGEE) project.

Slurm is also used to schedule and manage some of the largest supercomputer clusters (see the TOP500 list).

Other approaches


Although most computer clusters are permanent fixtures, attempts at flash mob computing have been made to build short-lived clusters for specific computations. However, larger-scale volunteer computing systems such as BOINC-based systems have had more followers.

See also


Basic concepts

Distributed computing

Specific systems

Computer farms

References

  1. ^ "Cluster vs grid computing". Stack Overflow.
  2. ^ a b Graham-Smith, Darien (29 June 2012). "Weekend Project: Build your own supercomputer". PC & Tech Authority. Retrieved 2 June 2017.
  3. ^ Bader, David; Pennington, Robert (May 2001). "Cluster Computing: Applications". Georgia Tech College of Computing. Archived from the original on 2025-08-06. Retrieved 2025-08-06.
  4. ^ "Nuclear weapons supercomputer reclaims world speed record for US". The Telegraph. 18 Jun 2012. Archived from the original on 2025-08-06. Retrieved 18 Jun 2012.
  5. ^ Gray, Jim; Rueter, Andreas (1993). Transaction processing : concepts and techniques. Morgan Kaufmann Publishers. ISBN 978-1558601901.
  6. ^ a b c Enokido, Tomoya; Barolli, Leonhard; Takizawa, Makoto (23 August 2007). Network-Based Information Systems: First International Conference, NBIS 2007. p. 375. ISBN 978-3-540-74572-3.
  7. ^ William W. Hargrove, Forrest M. Hoffman and Thomas Sterling (August 16, 2001). "The Do-It-Yourself Supercomputer". Scientific American. Vol. 265, no. 2. pp. 72–79. Retrieved October 18, 2011.
  8. ^ Hargrove, William W.; Hoffman, Forrest M. (1999). "Cluster Computing: Linux Taken to the Extreme". Linux Magazine. Archived from the original on October 18, 2011. Retrieved October 18, 2011.
  9. ^ Yokokawa, Mitsuo; et al. (1–3 August 2011). The K computer: Japanese next-generation supercomputer development project. International Symposium on Low Power Electronics and Design (ISLPED). pp. 371–372. doi:10.1109/ISLPED.2011.5993668.
  10. ^ Pfister, Gregory (1998). In Search of Clusters (2nd ed.). Upper Saddle River, NJ: Prentice Hall PTR. p. 36. ISBN 978-0-13-899709-0.
  11. ^ Katzman, James A. (1982). "Chapter 29, The Tandem 16: A Fault-Tolerant Computing System". In Siewiorek, Donald P. (ed.). Computer Structure: Principles and Examples. U.S.A.: McGraw-Hill Book Company. pp. 470–485.
  12. ^ "History of TANDEM COMPUTERS, INC. – FundingUniverse". www.fundinguniverse.com. Retrieved 2025-08-06.
  13. ^ Hill, Mark Donald; Jouppi, Norman Paul; Sohi, Gurindar (1999). Readings in computer architecture. Gulf Professional. pp. 41–48. ISBN 978-1-55860-539-8.
  14. ^ a b Sloan, Joseph D. (2004). High Performance Linux Clusters. "O'Reilly Media, Inc.". ISBN 978-0-596-00570-2.
  15. ^ a b c d Daydé, Michel; Dongarra, Jack (2005). High Performance Computing for Computational Science – VECPAR 2004. Springer. pp. 120–121. ISBN 978-3-540-25424-9.
  16. ^ "IBM Cluster System : Benefits". IBM. Archived from the original on 29 April 2016. Retrieved 8 September 2014.
  17. ^ "Evaluating the Benefits of Clustering". Microsoft. 28 March 2003. Archived from the original on 22 April 2016. Retrieved 8 September 2014.
  18. ^ Hamada, Tsuyoshi; et al. (2009). "A novel multiple-walk parallel algorithm for the Barnes–Hut treecode on GPUs – towards cost effective, high performance N-body simulation". Computer Science – Research and Development. 24 (1–2): 21–31. doi:10.1007/s00450-009-0089-1. S2CID 31071570.
  19. ^ a b Mauer, Ryan (12 Jan 2006). "Xen Virtualization and Linux Clustering, Part 1". Linux Journal. Retrieved 2 Jun 2017.
  20. ^ a b c Milicchio, Franco; Gehrke, Wolfgang Alexander (2007). Distributed services with OpenAFS: for enterprise and education. Springer. pp. 339–341. ISBN 9783540366348.
  21. ^ a b c d e Prabhu, C.S.R. (2008). Grid and Cluster Computing. PHI Learning Pvt. pp. 109–112. ISBN 978-8120334281.
  22. ^ Gropp, William; Lusk, Ewing; Skjellum, Anthony (1996). "A High-Performance, Portable Implementation of the MPI Message Passing Interface". Parallel Computing. 22 (6): 789–828. CiteSeerX 10.1.1.102.9485. doi:10.1016/0167-8191(96)00024-5.
  23. ^ a b c Patterson, David A.; Hennessy, John L. (2011). Computer Organization and Design. Elsevier. pp. 641–642. ISBN 978-0-12-374750-1.
  24. ^ a b K. Shirahata; et al. (30 Nov – 3 Dec 2010). Hybrid Map Task Scheduling for GPU-Based Heterogeneous Clusters. Cloud Computing Technology and Science (CloudCom). pp. 733–740. doi:10.1109/CloudCom.2010.55. ISBN 978-1-4244-9405-7.
  25. ^ a b c "Alan Robertson Resource fencing using STONITH" (PDF). IBM Linux Research Center, 2010. Archived from the original (PDF) on 2025-08-06.
  26. ^ Vargas, Enrique; Bianco, Joseph; Deeths, David (2001). Sun Cluster environment: Sun Cluster 2.2. Prentice Hall Professional. p. 58. ISBN 9780130418708.
  27. ^ a b Aho, Alfred V.; Blum, Edward K. (2011). Computer Science: The Hardware, Software and Heart of It. Springer. pp. 156–166. ISBN 978-1-4614-1167-3.
  28. ^ Rauber, Thomas; Rünger, Gudula (2010). Parallel Programming: For Multicore and Cluster Systems. Springer. pp. 94–95. ISBN 978-3-642-04817-3.
  29. ^ Francioni, Joan M.; Pancake, Cherri M. (April 2000). "A Debugging Standard for High-performance computing". Scientific Programming. 8 (2). Amsterdam, Netherlands: IOS Press: 95–108. doi:10.1155/2000/971291. ISSN 1058-9244.
  30. ^ a b Sloot, Peter, ed. (2003). Computational Science: ICCS 2003: International Conference. pp. 291–292. ISBN 3-540-40195-4.
