Chapter 1. Red Hat Cluster Suite Overview

1.3. Cluster Infrastructure

The Red Hat Cluster Suite cluster infrastructure provides the basic functions for a group of computers (called nodes or members) to work together as a cluster. Once a cluster is formed using the cluster infrastructure, you can use other Red Hat Cluster Suite components to suit your clustering needs (for example, setting up a cluster for sharing files on a GFS file system or setting up service failover). The cluster infrastructure performs the following functions:

• Cluster management
• Lock management
• Fencing
• Cluster configuration management

1.3.1. Cluster Management

Cluster management manages cluster quorum and cluster membership. One of the following Red Hat Cluster Suite components performs cluster management: CMAN (an abbreviation for cluster manager) or GULM (Grand Unified Lock Manager). CMAN operates as the cluster manager if a cluster is configured to use DLM (Distributed Lock Manager) as the lock manager. GULM operates as the cluster manager if a cluster is configured to use GULM as the lock manager. The major difference between the two cluster managers is that CMAN is a distributed cluster manager and GULM is a client-server cluster manager. CMAN runs in each cluster node; cluster management is distributed across all nodes in the cluster (refer to Figure 1.2, "CMAN/DLM Overview"). GULM runs in nodes designated as GULM server nodes; cluster management is centralized in those nodes (refer to Figure 1.3, "GULM Overview"). GULM server nodes manage the cluster through GULM clients running in the cluster nodes. With GULM, cluster management operates in a limited number of nodes: either one, three, or five nodes configured as GULM servers.

The cluster manager keeps track of cluster quorum by monitoring the count of cluster nodes that run the cluster manager. (In a CMAN cluster, all cluster nodes run the cluster manager; in a GULM cluster, only the GULM servers run the cluster manager.)
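The choice of cluster manager follows from the lock manager declared in the cluster configuration file, /etc/cluster/cluster.conf. The fragments below are illustrative sketches only: the cluster and node names are invented, many required sections (such as fence devices) are omitted, and exact element names and attributes vary between Red Hat Cluster Suite releases, so consult the documentation for your release.

```xml
<?xml version="1.0"?>
<!-- CMAN/DLM cluster: CMAN acts as the (distributed) cluster manager. -->
<cluster name="example" config_version="1">
  <cman/>
  <clusternodes>
    <clusternode name="node-01" votes="1"/>
    <clusternode name="node-02" votes="1"/>
    <clusternode name="node-03" votes="1"/>
  </clusternodes>
</cluster>
```

```xml
<?xml version="1.0"?>
<!-- GULM cluster: the listed lock-server nodes act as the
     (client-server) cluster managers. -->
<cluster name="example" config_version="1">
  <gulm>
    <lockserver name="node-01"/>
    <lockserver name="node-02"/>
    <lockserver name="node-03"/>
  </gulm>
</cluster>
```

Note that the GULM variant lists an odd number of lock servers (one, three, or five), matching the limit described above.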
If more than half the nodes that run the cluster manager are active, the cluster has quorum. If half the nodes that run the cluster manager (or fewer) are active, the cluster does not have quorum, and all cluster activity is stopped. Cluster quorum prevents the occurrence of a "split-brain" condition: a condition in which two instances of the same cluster are running. A split-brain condition would allow each cluster instance to access cluster resources without knowledge of the other cluster instance, resulting in corrupted cluster integrity.

In a CMAN cluster, quorum is determined by communication of heartbeats among cluster nodes via Ethernet. Optionally, quorum can be determined by a combination of communicating heartbeats via Ethernet and through a quorum disk. For quorum via Ethernet, quorum consists of 50 percent of the node votes plus 1. For quorum via quorum disk, quorum consists of user-specified conditions.

Note
In a CMAN cluster, by default each node has one quorum vote for establishing quorum. Optionally, you can configure each node to have more than one vote.

In a GULM cluster, the quorum consists of a majority of the nodes designated as GULM servers, according to the number of GULM servers configured:
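The two quorum rules can be sketched as simple arithmetic. This is a minimal illustration only; the helper names are hypothetical and not part of any Red Hat Cluster Suite interface.

```python
def cman_has_quorum(active_votes: int, total_votes: int) -> bool:
    """CMAN rule (Ethernet heartbeats): quorum is more than half of the
    total node votes, i.e. 50 percent of the votes plus 1."""
    return active_votes >= total_votes // 2 + 1

def gulm_has_quorum(active_servers: int, configured_servers: int) -> bool:
    """GULM rule: a majority of the configured GULM servers
    (one, three, or five) must be active."""
    return active_servers > configured_servers // 2

# A five-node CMAN cluster with one vote per node needs three votes:
print(cman_has_quorum(3, 5))  # True
print(cman_has_quorum(2, 5))  # False: cluster activity stops

# A three-server GULM cluster needs two active servers:
print(gulm_has_quorum(2, 3))  # True
```

With an even vote total the rule still requires a strict majority: for example, a four-vote CMAN cluster needs three active votes, which is why two equal halves of a partitioned cluster can never both have quorum.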