
RHCS Configuration Example

2022-05-25 Source: 华拓网

Topology diagram

Notes

1. Keep the system clocks of all nodes closely in sync.
2. The heartbeat IP must be configured.

I. Shared Storage Configuration
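Before starting, note 1 above can be checked with a quick clock-skew test. This is a minimal sketch: the peer host name `node2.a.com` and passwordless ssh are assumptions about your environment, and the remote call is left commented so the sketch runs standalone.

```shell
# Absolute difference of two epoch timestamps, in seconds.
skew() {
  a=$1; b=$2
  echo $(( a > b ? a - b : b - a ))
}

local_ts=$(date +%s)
# remote_ts=$(ssh node2.a.com date +%s)   # use this on a real cluster node
remote_ts=$local_ts                       # placeholder so the sketch runs standalone
echo "clock skew: $(skew "$local_ts" "$remote_ts")s"
```

Anything more than a few seconds of skew is worth fixing (for example with ntpd) before the cluster work begins.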

Target configuration

1. Set the hostname

[root@localhost ~]# vim /etc/sysconfig/network
HOSTNAME=target.a.com
[root@localhost ~]# hostname target.a.com

2. Edit the hosts resolution file

[root@target ~]# vim /etc/hosts
192.168.4.1   node1.a.com
192.168.4.2   node2.a.com
192.168.4.3   target.a.com

3. Configure the yum repository

[root@target ~]# vim /etc/yum.repos.d/rhel-debuginfo.repo
[rhel-server]
name=Red Hat Enterprise Linux server
baseurl=file:///mnt/cdrom/Server/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel-cluster]
name=Red Hat Enterprise Linux cluster
baseurl=file:///mnt/cdrom/Cluster/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel-clusterstorage]
name=Red Hat Enterprise Linux cluster storage
baseurl=file:///mnt/cdrom/ClusterStorage/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

4. Mount the installation media and update the package index

[root@target ~]# mount /dev/cdrom /mnt/cdrom
mount: block device /dev/cdrom is write-protected, mounting read-only
[root@target ~]# yum update

5. Install the target software

[root@target ~]# yum install scsi-target-utils

6. Create the shared disk

[root@target ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305): +1G
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@target ~]#

7. Edit the tgt target configuration file

[root@target ~]# vim /etc/tgt/targets.conf
<target iqn.2013-01.com.a:target>
    # List of files to export as LUNs
    backing-store /dev/sdb1
    # Authentication:
    # if no "incominguser" is specified, it is not used
    # incominguser backup secretpass12
    # Access control:
    # defaults to ALL if no "initiator-address" is specified
    initiator-address 192.168.4.0/24
</target>

8. Start the scsi service

[root@target ~]# service tgtd start
Starting SCSI target daemon:                               [  OK  ]

9. View the configuration

[root@target ~]# tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2013-01.com.a:target
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store: No backing store
        LUN: 1
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 1012 MB
            Online: Yes
            Removable media: No
            Backing store: /dev/sdb1
    Account information:
    ACL information:
        192.168.4.0/24

Node2.a.com Configuration

1. Base configuration, done the same way as on the target

[root@localhost ~]# vim /etc/sysconfig/network
HOSTNAME=node2.a.com
[root@localhost ~]# hostname node2.a.com
[root@node2 ~]# scp 192.168.4.3:/etc/hosts /etc/
[root@node2 ~]# scp 192.168.4.3:/etc/yum.repos.d/rhel-debuginfo.repo /etc/yum.repos.d/
[root@node2 ~]# mount /dev/cdrom /mnt/cdrom
mount: block device /dev/cdrom is write-protected, mounting read-only
[root@node2 ~]# yum update

2. Install the iscsi software

[root@node2 ~]# yum install iscsi-initiator-utils

3. Configure the iscsi initiator name

[root@node2 ~]# vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2013-01.com.a.node2

4. Start the iscsi service

[root@node2 ~]# service iscsi start
iscsid is stopped
Turning off network shutdown.                              [  OK  ]
Starting iSCSI daemon:                                     [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!      [  OK  ]

5. Discover the device

[root@node2 ~]# iscsiadm --mode discovery --type sendtargets --portal 192.168.4.3
192.168.4.3:3260,1 iqn.2013-01.com.a:target

6. Log in to the device

[root@node2 ~]# iscsiadm --mode node --targetname iqn.2013-01.com.a:target --portal 192.168.4.3:3260 --login
Logging in to [iface: default, target: iqn.2013-01.com.a:target, portal: 192.168.4.3,3260]
Login to [iface: default, target: iqn.2013-01.com.a:target, portal: 192.168.4.3,3260]: successful
[root@node2 ~]#
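The discovery output already contains everything the login command needs. As a sketch, the `portal,tpgt iqn` line can be split in plain shell to build the `--login` invocation (the sample line is copied from the discovery output above):

```shell
# One line of sendtargets discovery output: "portal:port,tpgt target-iqn"
line="192.168.4.3:3260,1 iqn.2013-01.com.a:target"

portal=${line%%,*}    # everything before the first comma -> portal:port
iqn=${line#* }        # everything after the first space  -> target IQN

# Print the login command this line describes (run it for real on a node).
echo "iscsiadm --mode node --targetname $iqn --portal $portal --login"
```

This is handy when a target exports several portals and you want to loop over the discovery output.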

7. Check the disks

[root@node2 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          38      305203+  83  Linux
/dev/sda2              39        2545    20137477+  83  Linux
/dev/sda3            2546        2610      522112+  82  Linux swap / Solaris

Disk /dev/sdb: 1011 MB, 1011677184 bytes
32 heads, 61 sectors/track, 1012 cylinders
Units = cylinders of 1952 * 512 = 999424 bytes

Disk /dev/sdb doesn't contain a valid partition table

8. Check the login status on the target

[root@target ~]# tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2013-01.com.a:target
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 1
            Initiator: iqn.2013-01.com.a.node2
            Connection: 0
                IP Address: 192.168.4.2

Node1.a.com Configuration

1. Set the hostname and copy the configuration files

[root@localhost ~]# vim /etc/sysconfig/network
HOSTNAME=node1.a.com
[root@localhost ~]# hostname node1.a.com
[root@node1 ~]# scp 192.168.4.3:/etc/hosts /etc/
[root@node1 ~]# scp 192.168.4.3:/etc/yum.repos.d/rhel-debuginfo.repo /etc/yum.repos.d/
[root@node1 ~]# mount /dev/cdrom /mnt/cdrom
mount: block device /dev/cdrom is write-protected, mounting read-only
[root@node1 ~]# yum update

2. Install the software

[root@node1 ~]# yum install iscsi-initiator-utils

3. Configure the iscsi initiator name

[root@node1 ~]# vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2013-01.com.a.node1

4. Start the service

[root@node1 ~]# service scsic start
scsic: unrecognized service
[root@node1 ~]# service iscsi start
iscsid is stopped
Turning off network shutdown.                              [  OK  ]
Starting iSCSI daemon:                                     [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!      [  OK  ]

5. Discover the device

[root@node1 ~]# iscsiadm --mode discovery --type sendtargets --portal 192.168.4.3
192.168.4.3:3260,1 iqn.2013-01.com.a:target

6. Log in to the device

[root@node1 ~]# iscsiadm --mode node --targetname iqn.2013-01.com.a:target --portal 192.168.4.3:3260 --login
Logging in to [iface: default, target: iqn.2013-01.com.a:target, portal: 192.168.4.3,3260]
Login to [iface: default, target: iqn.2013-01.com.a:target, portal: 192.168.4.3,3260]: successful

7. Check the disks

[root@node1 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          38      305203+  83  Linux
/dev/sda2              39        2545    20137477+  83  Linux
/dev/sda3            2546        2610      522112+  82  Linux swap / Solaris

Disk /dev/sdb: 1011 MB, 1011677184 bytes
32 heads, 61 sectors/track, 1012 cylinders
Units = cylinders of 1952 * 512 = 999424 bytes

8. Check the login status on the target

[root@target ~]# tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2013-01.com.a:target
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 1
            Initiator: iqn.2013-01.com.a.node2
            Connection: 0
                IP Address: 192.168.4.2
        I_T nexus: 2
            Initiator: iqn.2013-01.com.a.node1
            Connection: 0
                IP Address: 192.168.4.1

II. GFS Disk Configuration

We configure it on node1.a.com.

1. PV stage

[root@node1 ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created

View the PV information:

[root@node1 ~]# pvscan
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
PV /dev/sdb                          lvm2 [964.81 MB]
Total: 1 [964.81 MB] / in use: 0 [0   ] / in no VG: 1 [964.81 MB]

2. VG stage

[root@node1 ~]# vgcreate vg0 /dev/sdb
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
Clustered volume group "vg0" successfully created

View the VG information:

[root@node1 ~]# vgscan
Reading all physical volumes. This may take a while...
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
Found volume group "vg0" using metadata type lvm2

3. LV stage

[root@node1 ~]# lvcreate -L 512 -n lv0 vg0
Logical volume "lv0" created

View the LV information:

[root@node1 ~]# lvscan
ACTIVE   '/dev/vg0/lv0' [512.00 MB] inherit

View the generated virtual disk:

[root@node1 ~]# ll /dev/vg0
total 0
lrwxrwxrwx 1 root root 19 Jan 24 19:31 lv0 -> /dev/mapper/vg0-lv0

4. Format lv0 as GFS

[root@node1 ~]# gfs_mkfs -p lock_dlm -t cluster1:lv0 -j 3 /dev/vg0/lv0
This will destroy any data on /dev/vg0/lv0.
Are you sure you want to proceed? [y/n] y
Device:              /dev/vg0/lv0
Blocksize:           4096
Filesystem Size:     32724
Journals:            3
Resource Groups:     8
Locking Protocol:    lock_dlm
Lock Table:          cluster1:lv0
Syncing...
All Done
[root@node1 ~]#
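The `-j` value passed to gfs_mkfs above is the journal count: GFS needs one journal per node that will mount the filesystem. Two nodes mount it here, so `-j 3` leaves one journal free; reading the extra one as a spare for a future node is our interpretation, not stated in the source.

```shell
# Journal sizing rule of thumb for gfs_mkfs: one journal per mounting node.
nodes=2      # node1.a.com and node2.a.com mount the filesystem
spare=1      # assumption: one extra journal for a node added later
journals=$((nodes + spare))

# Print the mkfs command this sizing implies (run it for real on a node).
echo "gfs_mkfs -p lock_dlm -t cluster1:lv0 -j ${journals} /dev/vg0/lv0"
```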

5. Test

Because the /dev/sdb that node1 and node2 see is the /dev/sdb1 exported by the target, operations performed on node1 also show up on node2. We can check the virtual disk that appeared on node2:

[root@node2 ~]# ll /dev/vg0
total 0
lrwxrwxrwx 1 root root 19 Jan 24 19:36 lv0 -> /dev/mapper/vg0-lv0

It is exactly the same as on node1.

We can also verify that the two nodes stay in sync. Mount /dev/vg0/lv0 on node1, create a file, and then look at node2.

[root@node1 ~]# mount /dev/vg0/lv0 /var/www/html
[root@node1 ~]# touch /var/www/html/index.html
[root@node1 ~]# ll /var/www/html
total 4
-rw-r--r-- 1 root root 0 Jan 24 19:41 index.html

Check on node2:

[root@node2 ~]# mount /dev/vg0/lv0 /var/www/html
[root@node2 ~]# ll /var/www/html
total 4
-rw-r--r-- 1 root root 0 Jan 24 19:41 index.html

The file is visible on both nodes, so the filesystem stays in sync.

Also, while one node has index.html open, opening the same file on the other node is not writable; this is the lock mechanism at work.
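GFS arbitrates this through its distributed lock manager (lock_dlm). The same "one writer at a time" behaviour can be seen on a single machine with util-linux's `flock`; this is only a local analogy to illustrate exclusive locking, not part of the cluster setup:

```shell
# Create a scratch file and take an exclusive lock on it via fd 9.
lockfile=$(mktemp)
exec 9>"$lockfile"
flock -n 9 && echo "first lock acquired"

# A second, independent open of the same file cannot take the lock while
# fd 9 holds it; -n makes the attempt fail immediately instead of waiting.
result=$(flock -n "$lockfile" -c 'echo acquired' 2>/dev/null || echo "blocked")
echo "second attempt: $result"

exec 9>&-        # release the lock
rm -f "$lockfile"
```

In the cluster, lock_dlm performs this arbitration across nodes over the network instead of within one kernel.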

III. RHCS Configuration

(1) Node2.a.com configuration

1. Install the software

[root@node2 ~]# yum install -y ricci

2. Start the ricci service

[root@node2 ~]# service ricci start
Starting oddjobd:                                          [  OK  ]
generating SSL certificates... done
Starting ricci:                                            [  OK  ]

3. Install httpd

[root@node2 ~]# mount /dev/cdrom /mnt/cdrom
[root@node2 ~]# yum install -y httpd

(2) Node1.a.com configuration

Install the software: ricci, luci, and httpd.

[root@node1 ~]# yum install -y ricci luci
[root@node1 cluster]# yum install -y httpd

1. Start the ricci service

[root@node1 ~]# service ricci start
Starting oddjobd:                                          [  OK  ]
generating SSL certificates... done
Starting ricci:                                            [  OK  ]
[root@node1 ~]#

2. Initialize luci and start the service

[root@node1 ~]# luci_admin init
Initializing the luci server

Creating the 'admin' user
Enter password:
Confirm password:
Please wait...
The admin password has been successfully set.
Generating SSL certificates...
The luci server has been successfully initialized

You must restart the luci server for changes to take effect.
Run "service luci restart" to do so
[root@node1 ~]#

[root@node1 ~]# service luci start
Starting luci: Generating https SSL certificates... done   [  OK  ]

Point your web browser to https://node1.a.com:8084 to access luci

3. Install the httpd service

[root@node1 cluster]# yum install -y httpd

4. Open the graphical configuration interface

Enter https://node1.a.com:8084 in the address bar, or use https://192.168.4.1:8084 directly.

5. Create a cluster

Creating the cluster takes a while; once it finishes, the next screen appears.

Create the fence daemon key.

6. Configure the IP resource

7. Configure the httpd service resource

8. Configure the storage resource and its mount point

9. Create a failover domain

10. Specify a fence for each node

Since no fence device is available, we add a manual fence to each node.

11. Add the service

12. Start the service

13. Verify that the service is running

We can check the cluster status on both nodes.

On node2.a.com:

[root@node2 ~]# clustat
Cluster Status for cluster1 @ Thu Jan 24 20:19:55 2013
Member Status: Quorate

 Member Name        ID   Status
 ------ ----        ---- ------
 node2.a.com           1 Online, Local, rgmanager
 node1.a.com           2 Online, rgmanager

 Service Name          Owner (Last)     State
 ------- ----          ----- ------     -----
 service:http-server   node1.a.com      started

On node1.a.com:

[root@node1 ~]# clustat
Cluster Status for cluster1 @ Thu Jan 24 20:20:35 2013
Member Status: Quorate

 Member Name        ID   Status
 ------ ----        ---- ------
 node2.a.com           1 Online, rgmanager
 node1.a.com           2 Online, Local, rgmanager

 Service Name          Owner (Last)     State
 ------- ----          ----- ------     -----
 service:http-server   node1.a.com      started

We can see that node1.a.com is the active node.

14. Create a homepage on each node

[root@node1 ~]# echo "node1" > /var/www/html/index.html
[root@node2 ~]# echo "node2" > /var/www/html/index.html

15. Test the web pages

16. Take down the active node

[root@node1 ~]# service ricci stop
Shutting down ricci:                                       [  OK  ]

Check the status. Refreshing the page now shows a failure. This is because there is no fence device, so the cluster cannot fail over between nodes, and node2.a.com does not take over as the active node.

Instead, we can shut the node1.a.com node down cleanly and start node2.a.com to test:

[root@node1 ~]# service ricci start    // start the ricci service first
Starting ricci:                                            [  OK  ]

Stop the cluster service.

Start node 2.

It starts successfully.

Test the web page.

Check the status:

[root@node2 ~]# clustat
Cluster Status for cluster1 @ Thu Jan 24 20:19:55 2013
Member Status: Quorate

 Member Name        ID   Status
 ------ ----        ---- ------
 node2.a.com           1 Online, Local, rgmanager
 node1.a.com           2 Online, rgmanager

 Service Name          Owner (Last)     State
 ------- ----          ----- ------     -----
 service:http-server   node1.a.com      started

Problems Encountered

After creating vg0 on one node, checking its status on the other node produced the following error:

[root@node2 ~]# vgscan
Reading all physical volumes. This may take a while...
/dev/cdrom: open failed: Read-only file system
Attempt to close device '/dev/cdrom' which is not open.
Skipping clustered volume group vg0    // skipping
[root@node2 ~]#

Solution

On this node, edit /etc/lvm/lvm.conf and change the value of locking_type to 0. Then run

vgchange -cn vg0

and finally change locking_type back to 1.
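The three steps can be sketched as a script. To keep the sketch safe to run anywhere, it edits a throwaway copy of the file rather than /etc/lvm/lvm.conf itself, and the real vgchange call is left as a comment:

```shell
# Work on a scratch copy; on the real node, point sed at /etc/lvm/lvm.conf.
conf=$(mktemp)
echo '    locking_type = 1' > "$conf"

# 1. Disable locking so the clustered VG can be modified locally.
sed -i 's/locking_type = 1/locking_type = 0/' "$conf"
step1=$(grep -c 'locking_type = 0' "$conf")

# 2. Clear the clustered flag on the volume group (run on the real node):
#    vgchange -cn vg0

# 3. Restore the original locking_type.
sed -i 's/locking_type = 0/locking_type = 1/' "$conf"
step3=$(grep -c 'locking_type = 1' "$conf")

rm -f "$conf"
```

Restoring locking_type in step 3 matters: leaving it at 0 disables LVM's locking for all future operations on that node.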
