/var/log/messages:
Jan 26 20:25:28 node2 corosync[2426]: [CMAN ] Unable to load new config in corosync: New configuration version has to be newer than current running configuration
Jan 26 20:25:28 node2 corosync[2426]: [CMAN ] Can't get updated config version 32: New configuration version has to be newer than current running configuration#012.
Jan 26 20:25:28 node2 corosync[2426]: [CMAN ] Activity suspended on this node
Jan 26 20:25:28 node2 corosync[2426]: [CMAN ] Error reloading the configuration, will retry every second
Jan 26 20:25:28 node2 corosync[2426]: [CMAN ] Checking for startup failure: time=1
Jan 26 20:25:28 node2 corosync[2426]: [CMAN ] Failed to get an up-to-date config file, wanted 32, only got 31. Will exit
It looks like the issue is with the cluster configuration version, and that is the reason the node is in the offline state.
Member Name          ID   Status
------ ----          ---- ------
node1                1    Online, Local
node2                2    Offline
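As a quick check of the mismatch (a minimal sketch, assuming the default /etc/cluster/cluster.conf path and ssh access from node1 to node2; the cluster name shown is just a placeholder), compare the config_version attribute on both nodes:
[root@node1 ~]# grep config_version /etc/cluster/cluster.conf
<cluster config_version="32" name="mycluster">
[root@node1 ~]# ssh node2 grep config_version /etc/cluster/cluster.conf
<cluster config_version="31" name="mycluster">
The lower number on node2 matches the "wanted 32, only got 31" message in the log above.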
Run the following command on node1 to sync the cluster.conf file across the nodes:
[root@node1 ~]# ccs -h node1 --sync
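If your version of ccs supports the --checkconf option, you can also confirm that every node now has the same cluster.conf before restarting cman on node2:
[root@node1 ~]# ccs -h node1 --checkconf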
[root@node2 ~]# service cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Starting gfs_controld... [ OK ]
Unfencing self... [ OK ]
Joining fence domain... [ OK ]
[root@node2 ~]#
Member Name          ID   Status
------ ----          ---- ------
node1                1    Online, Local, rgmanager
node2                2    Online
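For reference, the membership tables shown here are the kind of output you get from clustat; cman_tool gives a similar view if you prefer it:
[root@node1 ~]# clustat
[root@node1 ~]# cman_tool nodes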
Thanks! This fixed my issue with cman not wanting to start on my second node. It started with me jacking up the config file, but even after I restored it, I was still having this issue. Never would have found the fix if I hadn't read this. Now if only I knew how to get my 2-node cluster to serve out our Xserve RAID as a Windows share, and keep it up in H/A mode... *sigh* My kingdom for an easy "how to".
Thanks for the positive comments. Join us on http://facebook.com/RSInfoMinds
You can also scp the cluster.conf file from node1 to node2 and then start cman.
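A minimal sketch of that alternative (assuming root ssh access between the nodes and the default /etc/cluster path):
[root@node2 ~]# scp root@node1:/etc/cluster/cluster.conf /etc/cluster/cluster.conf
[root@node2 ~]# service cman start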
We are now online at www.rsinfominds.com