About Us

RSInfoMinds is a web-based IT training and consultancy firm, established with the goal of training people in the IT infrastructure field. We provide online and classroom training in various areas of IT infrastructure management.

Join Us: http://www.facebook.com/RSInfoMinds
Mail Us: rsinfominds@gmail.com
Twitter: @RSInfoMinds

We specialize in the following courses:

Redhat Linux Admin
Redhat Linux Cluster
Redhat Virtualization
IBM AIX Admin
IBM AIX Virtualization
IBM AIX Cluster
HP Unix Admin
HP Unix Cluster
HP Unix Virtualization
Shell Scripting
Veritas Volume Manager
Veritas Cluster
Oracle Core DBA
VMware


We provide training in such a way that you gain in-depth knowledge of the courses you choose.

We ensure you are confident in each and every technical aspect that the IT industry needs and expects from you.

We also conduct workshops on the latest technologies, with faculty sharing their real-time work experience to make you the best.

Monday, 27 January 2014

Starting cman... cman_tool: Cannot open connection to cman, is it running ? Check cluster logs for details

/var/log/messages:

Jan 26 20:25:28 node2 corosync[2426]:   [CMAN  ] Unable to load new config in corosync: New configuration version has to be newer than current running configuration
Jan 26 20:25:28 node2 corosync[2426]:   [CMAN  ] Can't get updated config version 32: New configuration version has to be newer than current running configuration#012.
Jan 26 20:25:28 node2 corosync[2426]:   [CMAN  ] Activity suspended on this node
Jan 26 20:25:28 node2 corosync[2426]:   [CMAN  ] Error reloading the configuration, will retry every second
Jan 26 20:25:28 node2 corosync[2426]:   [CMAN  ] Checking for startup failure: time=1
Jan 26 20:25:28 node2 corosync[2426]:   [CMAN  ] Failed to get an up-to-date config file, wanted 32, only got 31. Will exit


It looks like the issue is with the cluster configuration version, and that is the reason the node is in the offline state.

 Member Name                                                     ID   Status
 ------ ----                                                     ---- ------
 node1                                                               1 Online, Local
 node2                                                               2 Offline

[root@node1 ~]# ccs -h node1 --sync : Command to sync the cluster.conf file across the nodes.
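To see the version mismatch (or to confirm the fix), compare the configuration version each node is running with against the version in the file; a minimal check, assuming the standard /etc/cluster/cluster.conf location:

[root@node1 ~]# cman_tool version                               # config version cman is currently running with
[root@node1 ~]# grep config_version /etc/cluster/cluster.conf   # version of the on-disk file

Once the file is in sync on both nodes, start cman on the failed node: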

[root@node2 ~]# service cman start
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
[root@node2 ~]#


 Member Name                                                     ID   Status
 ------ ----                                                     ---- ------
 node1                                                               1 Online, Local, rgmanager
 node2                                                               2 Online

Sunday, 26 January 2014

Failed; service running on original owner

When trying to relocate a service in an RHEL cluster:

[root@node2 ~]# clusvcadm -r SQL_SG
Trying to relocate service:SQL_SG...Failed; service running on original owner
[root@node2 ~]#

If you encounter the above message, the possible cause is the following:

cat /etc/cluster/cluster.conf

 <service domain="Webserver_FA" exclusive="1" name="Webserver_SG" recovery="relocate">

<service domain="SQL_FA" exclusive="1" name="SQL_SG" recovery="relocate">

From the above we can see that the cluster is configured to run two different services in exclusive mode. An exclusive service will not start on a node that is already running another service, so the relocation fails and the service stays on (or returns to) its original owner.

So you can either disable the exclusive option for SQL_SG and start it, or make SQL_SG part of the Webserver_SG service group.
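One way to apply the first option is sketched below, assuming cluster.conf is edited by hand on node1 and its config_version is incremented before syncing:

<service domain="SQL_FA" exclusive="0" name="SQL_SG" recovery="relocate">

[root@node1 ~]# ccs -h node1 --sync --activate   # push the updated cluster.conf to the other node
[root@node1 ~]# clusvcadm -e SQL_SG              # enable (start) the service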

Before the change:

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:SQL_SG                 (none)                         stopped
 service:Webserver_SG           node1                          started


After the change, both services are running:

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:SQL_SG                 node2                          started
 service:Webserver_SG           node1                          started

Saturday, 25 January 2014

connect() failed on local socket: No such file or directory Internal cluster locking initialisation failed. WARNING: Falling back to local file-based locking. Volume Groups with the clustered attribute will be inaccessible.

If you get this message, first check whether the clvmd process is running.
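A quick way to check is via the init script or the process list (a minimal sketch):

[root@node1 /]# service clvmd status
[root@node1 /]# ps -ef | grep clvmd

If it is not running, start it: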

[root@node1 /]# service clvmd start
Starting clvmd:
Activating VG(s):   2 logical volume(s) in volume group "VolGroup" now active
  clvmd not running on node node2
                                                           [  OK  ]
[root@node1 /]#

[root@node1 /]# pvs
  PV                 VG       Fmt  Attr PSize PFree
  /dev/mapper/oracle          lvm2 a--  1.00g 1.00g
  /dev/sda2          VolGroup lvm2 a--  7.51g    0
[root@node1 /]#

Thursday, 23 January 2014

libvirtError: internal error Cannot find suitable emulator for x86_64


If you get this message on a first-time configuration (a sample check sequence follows this list):

1) Make sure the libvirtd daemon is running.

2) Restart libvirtd.

3) If that does not fix the issue, make sure the virtualization packages are installed on the server.

4) Otherwise, reboot the server.
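A minimal check sequence for steps 1-3 on RHEL 6 with KVM might look like the following (a sketch; the package names assume the qemu-kvm/libvirt stack):

# service libvirtd status
# service libvirtd restart
# rpm -qa | grep -E 'qemu-kvm|libvirt'   # confirm the virtualization packages are installed
# lsmod | grep kvm                       # confirm the kvm kernel modules are loaded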

Wednesday, 22 January 2014

Root Disk Mirroring In AIX

# lspv
hdisk0          00046474e2326aa4                    rootvg          active
hdisk1          000b026db61a3de7                    None

# extendvg -f rootvg hdisk1

#  mirrorvg -S -c 2 rootvg hdisk1
0516-1124 mirrorvg: Quorum requirement turned off, reboot system for this
        to take effect for rootvg.
0516-1126 mirrorvg: rootvg successfully mirrored, user should perform
        bosboot of system to initialize boot records.  Then, user must modify
        bootlist to include:  hdisk0 hdisk1.

# lsvg -p rootvg
rootvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk1            active            542         357         108..24..08..108..109
hdisk0            active            542         357         108..24..08..108..109
#
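As the mirrorvg output notes, the boot image should be rebuilt on the new disk before updating the boot list (a sketch, assuming hdisk1 is the newly added mirror; the bootlist change itself is shown below):

# bosboot -ad /dev/hdisk1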

# bootlist -m both -o hdisk0 hdisk1
hdisk0 blv=hd5
hdisk1
hdisk0 blv=hd5
hdisk1
#



Tuesday, 14 January 2014

Failover A Service Group In RHEL

I am going to show how to fail over a service group in an RHEL 6 cluster.

[root@node1 /]# clustat
Cluster Status for mycluster @ Mon Jan 13 21:33:34 2014
Member Status: Quorate

 Member Name                                                     ID   Status
 ------ ----                                                     ---- ------
 node1                                                               1 Online, Local, rgmanager
 node2                                                               2 Online, rgmanager

 Service Name                                                     Owner (Last)                                                     State
 ------- ----                                                     ----- ------                                                     -----
 service:Oracle_SG                                                node1                                                            started
[root@node1 /]#

The Oracle_SG service is currently running on node1.

I am going to relocate it to node2.

[root@node1 /]# clusvcadm -r service:Oracle_SG -m node2
Trying to relocate service:Oracle_SG to node2...Success
service:Oracle_SG is now running on node2
[root@node1 /]#

[root@node1 /]# clustat
Cluster Status for mycluster @ Mon Jan 13 21:37:14 2014
Member Status: Quorate

 Member Name                                                     ID   Status
 ------ ----                                                     ---- ------
 node1                                                               1 Online, Local, rgmanager
 node2                                                               2 Online, rgmanager

 Service Name                                                     Owner (Last)                                                     State
 ------- ----                                                     ----- ------                                                     -----
 service:Oracle_SG                                                node2                                                            started
[root@node1 /]#
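To move the service back, run the same command with node1 as the target; if the -m option is omitted, rgmanager picks the target node itself. A sketch:

[root@node1 /]# clusvcadm -r service:Oracle_SG -m node1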

Monday, 13 January 2014

Steps To Start Cluster On RHEL

[root@node1 ~]# service cman start
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
[root@node1 ~]#

[root@node1 ~]# service clvmd start
Starting clvmd:
Activating VG(s):   2 logical volume(s) in volume group "vg_node1" now active
                                                           [  OK  ]
[root@node1 ~]# service rgmanager start
Starting Cluster Service Manager:                          [  OK  ]
[root@node1 ~]# clustat
Cluster Status for oracle @ Sun Jan 12 17:21:17 2014
Member Status: Quorate

 Member Name                                                     ID   Status
 ------ ----                                                     ---- ------
 node1                                                               1 Online, Local
 node2                                                               2 Online

[root@node1 ~]# clustat
Cluster Status for oracle @ Sun Jan 12 17:21:23 2014
Member Status: Quorate

 Member Name                                                     ID   Status
 ------ ----                                                     ---- ------
 node1                                                               1 Online, Local, rgmanager
 node2                                                               2 Online, rgmanager

 Service Name                                                     Owner (Last)                                                     State
 ------- ----                                                     ----- ------                                                     -----
 service:Oracle_SG                                                node1                                                            started
[root@node1 ~]#

Steps To Stop Cluster On RHEL

[root@node1 ~]# service rgmanager stop
Stopping Cluster Service Manager:                          [  OK  ]
[root@node1 ~]#

[root@node1 ~]# service clvmd stop
Signaling clvmd to exit                                    [  OK  ]
Waiting for clvmd to exit:                                 [  OK  ]
clvmd terminated                                           [  OK  ]
[root@node1 ~]#

[root@node1 ~]# service cman stop
Stopping cluster:
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Waiting for corosync to shutdown:                       [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]
[root@node1 ~]#
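If the cluster stack should also come up automatically at boot, the same services can be enabled with chkconfig (a sketch; enable only the services you actually use):

[root@node1 ~]# chkconfig cman on
[root@node1 ~]# chkconfig clvmd on
[root@node1 ~]# chkconfig rgmanager on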

Thursday, 2 January 2014

Build A Filesystem Using VCS In Linux

VCS Filesystem Creation

Veritas Volume Manager Commands

/etc/default/vxassist : File that contains default attributes of vxassist command.

# vxdisk -f scandisks : Scan for new disks connected to the system and reinitiate dynamic configuration of MPIO disks.

# vxdctl enable : Rebuild the volume device node directory and update the DMP internal database after a new disk is added to the system.

# vxdisk scandisks new : Discover only new devices that were not known earlier.


# vxdisk scandisks fabric : Discover fabric devices.


# vxdisk scandisks device=c#t#d#,c#t#d#  : Scan for particular disks.


# vxdisk scandisks \!device=c1t1d1 : Scan all devices except c1t1d1.


# vxdisk scandisks \!ctlr=c1 : Scan all disks except those on the logical controller c1.

# vxdisk scandisks pctlr=8/12.8.0.255.0 : Scan for devices connected to the specific controller.

The items in a list of physical controllers are separated by + characters.

# vxdmpadm getctlr all : Command to get the controllers on the machine.

# vxddladm list : List all the devices, including SCSI devices.

The following is a sample output:
HBA c2 (20:00:00:E0:8B:19:77:BE)
Port c2_p0 (50:0A:09:80:85:84:9D:84)
Target c2_p0_t0 (50:0A:09:81:85:84:9D:84)
LUN c2t0d0


# vxddladm list hbas : List the HBAs on the machine.
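To tie these commands back to the title of this post, the following is roughly how a VxFS filesystem is built on a VxVM volume before it is handed over to VCS (a sketch; the disk name sdb, disk group datadg, volume datavol and mount point /data are made-up examples, and vxdisksetup may live under /etc/vx/bin):

# vxdisksetup -i sdb                          # initialize the disk for VxVM use
# vxdg init datadg datadg01=sdb               # create a disk group containing the disk
# vxassist -g datadg make datavol 1g          # create a 1 GB volume in the disk group
# mkfs -t vxfs /dev/vx/rdsk/datadg/datavol    # build a VxFS filesystem on the volume
# mkdir -p /data
# mount -t vxfs /dev/vx/dsk/datadg/datavol /data

Under VCS, the disk group, volume and mount are normally placed under DiskGroup, Volume and Mount resources in a service group rather than being mounted by hand.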