Category Archives: HA

Infrastructure New Design (DRBD + Heartbeat + Pacemaker + Corosync)



  1. Technologies used: Heartbeat & DRBD, or Pacemaker, Corosync, and DRBD
    1. Heartbeat : checks whether a service is up or down
    2. DRBD      : network-level data mirroring
  2. Requirements
    1. The server should use SAS disks of at least 250 GB (RAID), since it will be split into 2 VMs; if the disks are 146 GB, choose a RAID level that yields a larger usable size.
    2. 4 NICs are recommended (standard server):
      i.  1 interface for data synchronization
      ii. 1 interface to the switch + host
    3. With 2 VMs, a minimum of 8 cores:
      i.   2 cores for the host
      ii.  3 cores for VM1
      iii. 3 cores for VM2
    4. A minimum of 8 GB of memory:
      i.   2 GB for the host
      ii.  3 GB for VM1
      iii. 3 GB for VM2
    5. Install the VMware (ESXi) operating system on an SSD, so disk space can be used more flexibly and a damaged/corrupt ESXi OS does not disturb the VMs.
    6. External storage:
      i. used for backups of databases, recordings, applications, etc. (recommended >= 1 TB SATA)
    7. With the requirements above, the setup is estimated to handle up to 75 agents.
  3. Services to fail over
    1. MySQL
    2. httpd
    3. Ecentrix
    4. Asterisk
  4. Data to mirror
    1. /var/www/html
    2. /var/lib/mysql
    3. /home (including all configs)
    4. /etc/asterisk
    5. /var/lib/asterisk/sounds
  5. If one server goes down/crashes/fails, the backup takes over automatically; this takes roughly 1 minute.
  6. Advantages:
    1. Data is kept safe, with little downtime
    2. A failed server does not require changing the server IP or the IP on agents/users (VIP)
    3. The media gateway registers to the virtual IP

PROXMOX: ACCESS A VM FROM A PUBLIC IP USING IPTABLES (NAT)


BEFORE

root@proxmox1:/etc/network# cat interfaces

# network interface settings
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address  192.168.137.104
netmask  255.255.255.0

iface eth1 inet manual

auto eth2
iface eth2 inet static
address  192.168.0.251
netmask  255.255.255.0

auto vmbr0
iface vmbr0 inet static
address  192.168.0.250
netmask  255.255.255.0
gateway  192.168.0.254
bridge_ports eth1
bridge_stp off

        bridge_fd 0
AFTER
root@proxmox1:/etc/network# cat interfaces

# network interface settings
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 192.168.137.104
netmask 255.255.255.0

iface eth1 inet manual

auto eth2
iface eth2 inet static
address 192.168.0.251
netmask 255.255.255.0

auto vmbr0
iface vmbr0 inet static
address 192.168.0.250
netmask 255.255.255.0
gateway 192.168.0.254
bridge_ports eth1
bridge_stp off
bridge_fd 0
post-up echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp   # note this line!

auto vmbr1
iface vmbr1 inet static
address 192.168.37.1
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '192.168.37.0/24' -o vmbr0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '192.168.37.0/24' -o vmbr0 -j MASQUERADE

Restart Network

root@proxmox1:/etc/network# /etc/init.d/networking restart

Run on HOST

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 2222 -j DNAT --to 192.168.37.2:22

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8888 -j DNAT --to 192.168.37.2:80

 

On the VM:
ifconfig eth0 192.168.37.2 netmask 255.255.255.0

route add default gw 192.168.37.1
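To make the forwarding rules survive a reboot, one option (a sketch; the port numbers and VM address are taken from the rules above) is to hang the DNAT rules off the vmbr1 stanza in /etc/network/interfaces on the host, next to the MASQUERADE rule that is already there:

```
auto vmbr1
iface vmbr1 inet static
    address 192.168.37.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '192.168.37.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '192.168.37.0/24' -o vmbr0 -j MASQUERADE
    post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 2222 -j DNAT --to 192.168.37.2:22
    post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 2222 -j DNAT --to 192.168.37.2:22
    post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8888 -j DNAT --to 192.168.37.2:80
    post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 8888 -j DNAT --to 192.168.37.2:80
```

Likewise, the VM's address and default gateway can be made permanent in the VM's own network configuration instead of running ifconfig/route by hand.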

Thanks


DRBD + HEARTBEAT CENTOS 6.3


[root@node1 drbd.d]# rpm -aq | egrep -i "heartbeat|drbd|kmod"
heartbeat-libs-3.0.4-1.el6.x86_64
kmod-drbd83-8.3.13-2.el6_3.elrepo.x86_64
drbd83-utils-8.3.13-1.el6.elrepo.x86_64
heartbeat-3.0.4-1.el6.x86_64
[root@node1 drbd.d]#

[root@node1 drbd.d]# cat global_common.conf
global {
usage-count yes;
dialog-refresh 1;
minor-count 5;
# minor-count dialog-refresh disable-ip-verification
}

common {
protocol C;

handlers {
pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
# fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
# split-brain "/usr/lib/drbd/notify-split-brain.sh root";
# out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
# before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
# after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
}

startup {
# wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
wfc-timeout 20;
degr-wfc-timeout 30;
}

disk {
# on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
# no-disk-drain no-md-flushes max-bio-bvecs
on-io-error detach;
}

net {
# sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
# max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
# after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
timeout 20;
connect-int 20;
ping-int 20;
max-buffers 2048;
max-epoch-size 2048;
ko-count 30;
cram-hmac-alg "sha1";
shared-secret "drbdtestingIntelix";
}

syncer {
# rate after al-extents use-rle cpu-mask verify-alg csums-alg
rate 10M;
al-extents 257;
}
}
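The syncer rate caps resynchronization bandwidth, here at 10 MB/s. A rough back-of-the-envelope estimate of a full resync time (a sketch; the 100 GB device size is a made-up example):

```shell
# Full-resync estimate: device size divided by the syncer rate.
size_mb=$((100 * 1024))   # hypothetical 100 GB backing device, in MB
rate_mb_s=10              # "rate 10M" from the syncer section
secs=$(( size_mb / rate_mb_s ))
echo "full resync: about ${secs}s (~$(( secs / 3600 )) hours)"
```

On a dedicated replication link a much higher rate is usually safe; 10M is quite conservative for gigabit.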

 

Load Balance Webserver + HAProxy on CentOS 6.2 / 64 bit


"HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer7 processing. Supporting tens of thousands of connections is clearly realistic with today's hardware. Its mode of operation makes its integration into existing architectures very easy and riskless, while still offering the possibility not to expose fragile web servers to the Net." (ref: http://www.howtoforge.com/setting-up-a-high-availability-load-balancer-with-haproxy-keepalived-on-debian-lenny)

 

Notes:

Server 192.168.0.253 acts as the load balancer (this is the server clients will access).

Servers 192.168.0.7 & 192.168.0.11 are the web servers hosting the web application, so make sure Apache is already installed on them.

Server 192.168.0.253

Install haproxy (pick the package matching your architecture, 32 or 64 bit):

[root@svrrepo ~]# rpm -ivh http://mirrors.ispros.com.bd/fedora-epel/6/x86_64/epel-release-6-7.noarch.rpm

[root@svrrepo ~]# yum search haproxy
haproxy.x86_64 : HA-Proxy is a TCP/HTTP reverse proxy for high availability environments
[root@svrrepo ~]# yum install haproxy.x86_64
After installation, a haproxy user is created in /etc/passwd:
[root@svrrepo haproxy]# cat /etc/passwd
haproxy:x:497:497:HAProxy user:/var/lib/haproxy:/bin/false

[root@svrrepo ~]# cd /etc/haproxy/
Back up the original config:

[root@svrrepo haproxy]# cp haproxy.cfg haproxy.cfg.def

[root@svrrepo html]# vim /etc/haproxy/haproxy.cfg

# add the following lines

listen SVR253 192.168.0.253:80 # load balancer IP
mode http
balance roundrobin
cookie JSESSIONID prefix
option httpchk HEAD /check.txt HTTP/1.0
option httpclose
option forwardfor
stats auth testing1:testing1 # username:password for the HAProxy statistics page
server server7 192.168.0.7:80 cookie A check  # web server 1
server server11 192.168.0.11:80 cookie B check # web server 2

Save with :wq.

Check whether the configuration is valid:
[root@svrrepo haproxy]# haproxy -f /etc/haproxy/haproxy.cfg -c
Start the service:

[root@svrrepo haproxy]# service haproxy start

Register it as a service (runlevel) so it starts when the server boots:

[root@svrrepo haproxy]# chkconfig haproxy on

Log in to servers 192.168.0.7 & 192.168.0.11

[root@server11 ~]# vim /var/www/html/index.php
<?php phpinfo();?>
[root@telephony ~]# vim /var/www/html/index.php
<?php phpinfo();?>
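The `option httpchk HEAD /check.txt HTTP/1.0` line in the config makes HAProxy probe /check.txt on each backend, so that file must exist on both web servers or the health checks will mark them down. A minimal sketch (the document root is the CentOS Apache default, an assumption):

```shell
# Run on BOTH web servers (192.168.0.7 and 192.168.0.11):
docroot=/var/www/html            # assumed Apache document root
mkdir -p "$docroot"
echo OK > "$docroot/check.txt"
```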

Access http://192.168.0.253/index.php
The phpinfo() output should appear.

To monitor whether load balancing is working, access http://192.168.0.253/haproxy?stats and log in with the username configured in haproxy.cfg (testing1:testing1).

Watch the logs on servers 192.168.0.7 & 192.168.0.11 with tail -f /var/log/httpd/access_log, then shut down one of the web servers and see what happens. Also watch the Bytes In/Out on the monitoring page.

VMware Interview Questions



  1. Explain your production environment. How many clusters, ESX hosts, data centers, hardware, etc.?
  2. How does VMotion work? What port number does it use?
  3. Prerequisites for VMotion?
  4. How does HA work? Port number? How many host failures are allowed, and why?
  5. What are active hosts / primary hosts in HA? Explain.
  6. Prerequisites for HA?
  7. How does DRS work? Which technology is used? What are the priority counts to migrate the VMs?
  8. How do snapshots work?
  9. Which files are created when creating a VM, and after powering on the VM?
  10. If the VMDK header file is corrupt, what will happen? How do you troubleshoot it?
  11. Prerequisites for VC and Update Manager?
  12. Have you ever patched an ESX host? What steps are involved?
  13. Have you ever installed an ESX host? What are the pre- and post-installation steps involved? What partitions would be listed, and what would their maximum sizes be?
  14. I turned on maintenance mode on an ESX host and all the VMs migrated to another host, but one VM failed to migrate. What are the possible reasons?
  15. How will you start / stop a VM from the command prompt?
  16. I upgraded a VM from 4 to 8 GB RAM; it fails at 90% while powering on. How do you troubleshoot?
  17. The storage team provided a new LUN ID. How will you configure the LUN in VC? What would the block size be (say, for a 500 GB volume)?
  18. I want to add a new VLAN to the production network. What steps are involved, and how do you enable it?
  19. Explain VCB. What is the minimum priority (*) to consolidate a machine?
  20. How does VDR work?
  21. What's the difference between the top and esxtop commands?
  22. How will you check network bandwidth utilization on an ESX host from the command prompt?
  23. How will you generate a report listing the ESX hosts, VMs, RAM and CPU used in your vSphere environment?
  24. What is the difference between connecting to the ESX host through VC and through the vSphere client? What services are involved? What port numbers are used?
  25. How does FT work? Prerequisites? Ports used?
  26. Can I VMotion between 2 different data centers? Why?
  27. Can I deploy a VM from a template into a different data center?
  28. I want to increase the system partition size (Windows 2003 Server guest OS) of a VM. How will you do it without any interruption to the end user?
  29. Which port number is used when 2 ESX hosts transfer data between each other?
  30. Unable to connect to VC through the vSphere client. What could be the reason? How do you troubleshoot?
  31. Have you ever upgraded ESX 3.5 to 4.0? How did you do it?
  32. What are the special features of vSphere 4.0, VC 4.0, ESX 4.0, and VM hardware version 7?
  33. What is AAM? Where is it used? How do you start or stop it from the command prompt?
  34. Have you ever called VMware support? Etc.
  35. Explain vSphere licensing. License server?
  36. How will you change the service console IP?
  37. What's the difference between ESX and ESXi?
  38. What's the difference between ESX 3.5 and ESX 4.0?

DRBD TIPS


The steps are as follows:

1. Add the new resource in /etc/drbd.conf
2. Create metadata for node1
[root@node1 ~]# drbdadm create-md asterisk
[root@node1 ~]# /etc/init.d/drbd start
[root@node1 ~]# /etc/init.d/drbd status (this should still be Secondary/Secondary and Inconsistent)
[root@node2 ~]# /etc/init.d/drbd start
3. Make node1 the primary
[root@node1 ~]# drbdadm -- --overwrite-data-of-peer primary asterisk
4. Format /dev/drbd1 (only on node1)
[root@node1 ~]# mkfs -j /dev/drbd1
[root@node1 ~]# tune2fs -c 1 -i 0 /dev/drbd1
[root@node1 ~]# mkdir /asterisk
[root@node2 ~]# mkdir /asterisk
[root@node1 ~]# mount -o rw /dev/drbd1 /asterisk
[root@node1 ~]# mkdir -p /asterisk/{01,02}
5. Make node1 the secondary
[root@node1 ~]# umount /asterisk
[root@node1 ~]# drbdadm secondary asterisk
6. Make node2 the primary
[root@node2 ~]# drbdadm primary asterisk
[root@node2 ~]# mount -o rw /dev/drbd1 /asterisk
[root@node2 ~]# ls -l /asterisk (folders 01 and 02 should appear)
7. Make node1 the primary again
[root@node2 ~]# umount /asterisk
[root@node2 ~]# drbdadm secondary asterisk
[root@node1 ~]# drbdadm primary asterisk
[root@node1 ~]# mount -o rw /dev/drbd1 /asterisk
[root@node1 ~]# ls -l /asterisk
8. Add to /etc/ha.d/haresources (node1 & node2)
node1 IPaddr::192.168.137.100 drbddisk::mydrbd Filesystem::/dev/drbd0::/data::ext3 mysql httpd drbddisk::asterisk Filesystem::/dev/drbd1::/asterisk::ext3 asterisk
9. Restart heartbeat on each node
10. Check the result
============================================================================================

[root@node1 ~]# lvcreate -n lvasterisk -L+100M /dev/vgdata
[root@node2 /]# lvcreate -n lvasterisk -L+100M /dev/vgdata
[root@node1 ~]# drbdadm create-md asterisk
[root@node1 ~]# /etc/init.d/drbd start
[root@node1 ~]# cat /proc/drbd

[root@node1 etc]# cat /proc/drbd
version: 8.0.16 (api:86/proto:86)
GIT-hash: d30881451c988619e243d6294a899139eed1183d build by mockbuild@v20z-x86-64.home.local, 2009-08-22 13:23:34
0: cs:Connected st:Secondary/Secondary ds:UpToDate/UpToDate C r—
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0
resync: used:0/61 hits:0 misses:0 starving:0 dirty:0 changed:0
act_log: used:0/257 hits:0 misses:0 starving:0 dirty:0 changed:0
1: cs:Connected st:Secondary/Secondary ds:Inconsistent/Inconsistent C r—
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0
resync: used:0/61 hits:0 misses:0 starving:0 dirty:0 changed:0
act_log: used:0/257 hits:0 misses:0 starving:0 dirty:0 changed:0

[root@node1 etc]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.0.16 (api:86/proto:86)
GIT-hash: d30881451c988619e243d6294a899139eed1183d build by mockbuild@v20z-x86-64.home.local, 2009-08-22 13:23:34
m:res cs st ds p mounted fstype
0:mydrbd Connected Secondary/Secondary UpToDate/UpToDate C
1:asterisk Connected Secondary/Secondary Inconsistent/Inconsistent C

[root@node2 ha.d]# cat /proc/drbd
version: 8.0.16 (api:86/proto:86)
GIT-hash: d30881451c988619e243d6294a899139eed1183d build by mockbuild@v20z-x86-64.home.local, 2009-08-22 13:23:34
0: cs:Connected st:Secondary/Secondary ds:UpToDate/UpToDate C r—
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0
resync: used:0/61 hits:0 misses:0 starving:0 dirty:0 changed:0
act_log: used:0/257 hits:0 misses:0 starving:0 dirty:0 changed:0
1: cs:Connected st:Secondary/Secondary ds:Inconsistent/Inconsistent C r—
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0
resync: used:0/61 hits:0 misses:0 starving:0 dirty:0 changed:0
act_log: used:0/257 hits:0 misses:0 starving:0 dirty:0 changed:0
[root@node2 ha.d]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.0.16 (api:86/proto:86)
GIT-hash: d30881451c988619e243d6294a899139eed1183d build by mockbuild@v20z-x86-64.home.local, 2009-08-22 13:23:34
m:res cs st ds p mounted fstype
0:mydrbd Connected Secondary/Secondary UpToDate/UpToDate C
1:asterisk Connected Secondary/Secondary Inconsistent/Inconsistent C
[root@node1 etc]# drbdadm -- --overwrite-data-of-peer primary asterisk
[root@node1 etc]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.0.16 (api:86/proto:86)
GIT-hash: d30881451c988619e243d6294a899139eed1183d build by mockbuild@v20z-x86-64.home.local, 2009-08-22 13:23:34
m:res cs st ds p mounted fstype
0:mydrbd Connected Secondary/Secondary UpToDate/UpToDate C
… sync’ed: 36.0% (65976/102360)K
1:asterisk SyncSource Primary/Secondary UpToDate/Inconsistent C

[root@node2 ha.d]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.0.16 (api:86/proto:86)
GIT-hash: d30881451c988619e243d6294a899139eed1183d build by mockbuild@v20z-x86-64.home.local, 2009-08-22 13:23:34
m:res cs st ds p mounted fstype
0:mydrbd Connected Secondary/Secondary UpToDate/UpToDate C
1:asterisk Connected Secondary/Primary UpToDate/UpToDate C
[root@node1 etc]# mkfs -j /dev/drbd1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
25688 inodes, 102360 blocks
5118 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
13 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@node1 etc]#
[root@node1 etc]# tune2fs -c 1 -i 0 /dev/drbd1
tune2fs 1.39 (29-May-2006)
Setting maximal mount count to 1
Setting interval between checks to 0 seconds
[root@node1 etc]# mkdir /asterisk
[root@node2 ha.d]# mkdir /asterisk
[root@node1 etc]# mount -o rw /dev/drbd1 /asterisk/

[root@node1 etc]# vim /etc/ha.d/haresources
node1 IPaddr::192.168.137.100 drbddisk::mydrbd Filesystem::/dev/drbd0::/data::ext3 httpd mysqld drbddisk::asterisk Filesystem::/dev/drbd1::/asterisk::ext3 asterisk

[root@node2 ha.d]# vim /etc/ha.d/haresources
node1 IPaddr::192.168.137.100 drbddisk::mydrbd Filesystem::/dev/drbd0::/data::ext3 httpd mysqld drbddisk::asterisk Filesystem::/dev/drbd1::/asterisk::ext3 asterisk
[root@node1 etc]# /etc/init.d/drbd restart
[root@node2 etc]# /etc/init.d/drbd restart
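A haresources line is ordered: the first field is the preferred node, and the remaining resources are started left-to-right (and stopped in reverse on failover). A small sketch that lists the start order of such a line:

```shell
# Print the start order of resource agents in a haresources line.
line="node1 IPaddr::192.168.137.100 drbddisk::mydrbd Filesystem::/dev/drbd0::/data::ext3 httpd mysqld drbddisk::asterisk Filesystem::/dev/drbd1::/asterisk::ext3 asterisk"
set -- $line
node=$1; shift
echo "preferred node: $node"
i=0
for res in "$@"; do
  i=$((i+1))
  echo "start $i: ${res%%::*}"   # agent name is the part before '::'
done
```

So the VIP comes up first, then the DRBD disks and filesystems, and the application services last; on the way down the order is reversed, which is exactly what you want.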

If WFConnection or StandAlone occurs (split brain):

[root@node2 etc]# drbdadm secondary mydrbd
[root@node2 etc]# drbdadm -- --discard-my-data connect mydrbd

[root@node1 etc]# drbdadm primary mydrbd
[root@node1 etc]# drbdadm connect mydrbd

DRBD & HEARTBEAT (High Availability Cluster)


This experiment was done in VirtualBox by creating two CentOS 5.5 32-bit guests and a virtual disk on each node. For now we only test the heartbeat functionality; DRBD will be described in detail later.

Case I
The services to fail over are httpd and mysql, so make sure Apache is installed on both servers and reachable from the network, in this case http://192.168.137.200 & http://192.168.137.201.

1. Configure /etc/hosts

[root@node1 ~]# vim /etc/hosts
127.0.0.1 localhost.localdomain localhost
#::1 localhost6.localdomain6 localhost6
192.168.137.200 node1.intelix.co.id node1
192.168.137.201 node2.intelix.co.id node2

[root@node2~]# vim /etc/hosts
127.0.0.1 localhost.localdomain localhost
#::1 localhost6.localdomain6 localhost6
192.168.137.200 node1.intelix.co.id node1
192.168.137.201 node2.intelix.co.id node2

Configure /etc/sysctl.conf


[root@node1 ~]# echo "kernel.hostname = node1" >> /etc/sysctl.conf
[root@node1 ~]# sysctl -p

[root@node2 ~]# echo "kernel.hostname = node2" >> /etc/sysctl.conf
[root@node2 ~]# sysctl -p

2. Packages to install on both nodes
[root@node1 ]yum -y install heartbeat.i386 OpenIPMI-libs heartbeat-pils.i386 heartbeat-stonith.i386 libnet.i386
[root@node2 ]yum -y install heartbeat.i386 OpenIPMI-libs heartbeat-pils.i386 heartbeat-stonith.i386 libnet.i386

3. Configuration files
[root@node1 ~]# cd /usr/share/doc/heartbeat-2.1.4/
[root@node1 heartbeat-2.1.4]# cp authkeys ha.cf haresources /etc/ha.d/
a. authkeys
[root@node1 ha.d]# vim /etc/ha.d/authkeys
Add at the very bottom:
auth 2
2 sha1 intelix-ha-igc
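Any string works as the sha1 secret; a stronger one can be generated from random data (a sketch). Note that heartbeat refuses to start unless the authkeys file is readable only by root:

```shell
# Generate a random 40-hex-character secret for the authkeys file.
key=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
echo "2 sha1 $key"
# then: chmod 600 /etc/ha.d/authkeys
```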
b. ha.cf
[root@node1 ha.d]# vim /etc/ha.d/ha.cf
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
initdead 120
udpport 694
bcast eth0
auto_failback on
node node1
node node2
c. haresources
[root@node1 ha.d]# vim /etc/ha.d/haresources
node1 192.168.137.100 httpd

4. Copy all the config files in /etc/ha.d/ to node2
[root@node1 ha.d]# cd /etc/ha.d/
[root@node1 ha.d]# scp -r authkeys ha.cf haresources root@192.168.137.201:/etc/ha.d/

5. Change the httpd.conf configuration on both nodes
[root@node1 ~]# vim /etc/httpd/conf/httpd.conf
The Listen option must be in the following format:
Listen 192.168.137.100:80

[root@node2 ~]# vim /etc/httpd/conf/httpd.conf
The Listen option must be in the following format:
Listen 192.168.137.100:80

6. Disable the httpd service with "chkconfig" so it does not come up at boot
[root@node1 ~]# chkconfig httpd off
[root@node2~]# chkconfig httpd off

7. Restart the heartbeat service on both nodes
[root@node1 ~]# /etc/init.d/heartbeat start
Starting High-Availability services:
2011/11/03_20:57:00 INFO: Resource is stopped
[ OK ]

[root@node2 ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services:
[ OK ]

[root@node2 ~]# /etc/init.d/heartbeat start
Starting High-Availability services:
2011/11/03_20:57:42 INFO: Resource is stopped

8. Testing
To verify the setup works correctly, open 3 terminals each for node1 and node2.

Node1
Terminal 1
[root@node1 ~]# echo "node01 apache test server" > /var/www/html/index.html
Terminal 2
[root@node1 ~]# tail -f /var/log/ha-log
Terminal 3
[root@node1 ~]# for i in `seq 1 199`; do ifconfig eth0:0; sleep 5; done

Node2
Terminal 1
[root@node2 ~]# echo "node02 apache test server" > /var/www/html/index.html
Terminal 2
[root@node2 ~]# tail -f /var/log/ha-log
Terminal 3
[root@node2 ~]# for i in `seq 1 199`; do ifconfig eth0:0; sleep 5; done

Open http://192.168.137.100/ in a browser.
If "node01 apache test server" appears, your config is correct.

Stop the heartbeat service on node1 and watch the logs + the eth0:0 status in the terminals on each node:
[root@node1 ~]# /etc/init.d/heartbeat stop
Reopen http://192.168.137.100/
If "node02 apache test server" now appears, your config is correct.

Start the heartbeat service on node1 again:
[root@node1 ~]# /etc/init.d/heartbeat start
Reopen http://192.168.137.100/
If "node01 apache test server" appears again, your config is working 100%.

MYSQL
To implement this for MySQL, make sure MySQL is installed on each node. The steps are as follows:

1. Edit /etc/ha.d/haresources on node1 & node2
Before
[root@node1 ]# vim /etc/ha.d/haresources
node1 192.168.137.100 httpd

After
[root@node1 ]# vim /etc/ha.d/haresources
node1 192.168.137.100 httpd mysqld

2. Disable the mysqld service at the runlevel so it does not come up at boot

[root@node1 ]# chkconfig mysqld off
[root@node2 ]# chkconfig mysqld off

3. Restart the heartbeat service
[root@node1 ]# /etc/init.d/heartbeat restart
[root@node2 ]# /etc/init.d/heartbeat restart

4. Grant privileges on node1 so it can be accessed remotely via SQLyog or MySQL-Front
[root@node1 ]# mysql -uroot -p
(press Enter for an empty password)
mysql> use mysql;
mysql> select user,password,host from user;
mysql> update user set user='root',host='%' where user='' and host='localhost';
mysql> flush privileges;
mysql> exit;

5. Stop the heartbeat service on node1
[root@node1 ]# /etc/init.d/heartbeat stop

6. Grant privileges on node2 so it can be accessed remotely via SQLyog or MySQL-Front
[root@node2 ]# mysql -uroot -p
(press Enter for an empty password)
mysql> use mysql;
mysql> select user,password,host from user;
mysql> update user set user='root',host='%' where user='' and host='localhost';
mysql> flush privileges;
mysql> exit;
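Editing the user table directly works, but the same result is usually achieved with GRANT, which also takes care of the password hash. A sketch (the password is a placeholder):

```shell
mysql -uroot -p <<'SQL'
-- allow remote root logins from any host (placeholder password)
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
SQL
```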

7. Start the heartbeat service on node1
[root@node1 ]# /etc/init.d/heartbeat start

8. Access 192.168.137.100 via MySQL-Front or SQLyog and create a database
9. Stop the heartbeat service on node1
10. Access 192.168.137.100 via MySQL-Front or SQLyog again: the server should still respond, but the database you created earlier will not appear, because there is no data synchronization between the nodes yet. To keep the data available on both nodes, what we need is:

"DRBD (Distributed Replicated Block Device)"

Example drbd.conf using LVM (/dev/vgdata/lvdata):

drbd.conf

ASTERISK

What about other services such as ssh, asterisk, ecentrix? Let's try them together and prepare the tools we need to simulate real field conditions.

Created by moses tambunan

http://dev.centos.org/centos/5/testing/i386/RPMS/

WFConnection /StandAlone Secondary/Unknown


As a pilot project I installed 2 guests in VirtualBox to simulate heartbeat and DRBD.
When I stopped node1 & node2 forcibly, both nodes came back up with the status shown below.
Another way to reproduce this state: stop the heartbeat service on node1, then quickly run ifdown eth0 on node2 (via the command line)
and check the status of each node. Here node2 never managed to take over from node1 before
node2 itself went down.

[root@node1 ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.0.16 (api:86/proto:86)
GIT-hash: d30881451c988619e243d6294a899139eed1183d build by mockbuild@v20z-x86-64.home.local, 2009-08-22 13:23:34
m:res     cs          st               ds                 p  mounted  fstype
0:mydrbd  StandAlone  Primary/Unknown  UpToDate/DUnknown  -  /data    ext3

[root@node2 ~]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.0.16 (api:86/proto:86)
GIT-hash: d30881451c988619e243d6294a899139eed1183d build by mockbuild@v20z-x86-64.home.local, 2009-08-22 13:23:34
m:res     cs            st                 ds                 p  mounted  fstype
0:mydrbd  WFConnection  Secondary/Unknown  UpToDate/DUnknown  C

The fix is simply to run:
[root@node1 ~]# drbdadm connect r0
Wait a moment until synchronization completes (watch it with cat /proc/drbd), then
check the status of each node:
[root@node2 sort]# cat /proc/drbd
version: 8.0.16 (api:86/proto:86)
GIT-hash: d30881451c988619e243d6294a899139eed1183d build by mockbuild@v20z-x86-64.home.local, 2009-08-22 13:23:34
 0: cs:SyncTarget st:Secondary/Primary ds:Inconsistent/UpToDate C r---
    ns:0 nr:135936 dw:135936 dr:0 al:0 bm:14 lo:1 pe:487 ua:0 ap:0
        [=================>..] sync'ed: 91.9% (15572/151508)K
        finish: 0:00:07 speed: 1,940 (1,788) K/sec
        resync: used:2/61 hits:8967 misses:16 starving:0 dirty:0 changed:16
        act_log: used:0/257 hits:0 misses:0 starving:0 dirty:0 changed:0

FINISH
[root@node1 ha.d]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.0.16 (api:86/proto:86)
GIT-hash: d30881451c988619e243d6294a899139eed1183d build by mockbuild@v20z-x86-64.home.local, 2009-08-22 13:23:34
m:res     cs         st                 ds                 p  mounted  fstype
0:mydrbd  Connected  Secondary/Primary  UpToDate/UpToDate  C
[root@node2 sort]# /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.0.16 (api:86/proto:86)
GIT-hash: d30881451c988619e243d6294a899139eed1183d build by mockbuild@v20z-x86-64.home.local, 2009-08-22 13:23:34
m:res     cs         st                 ds                 p  mounted  fstype
0:mydrbd  Connected  Primary/Secondary  UpToDate/UpToDate  C  /data    ext3

MySQL HA with DRBD and Heartbeat on CentOS 5.5


July 20, 2010

This is one of a few MySQL high-availability strategies. I have used this for years and found it works great. If you don't know about DRBD and MySQL you should read Peter's comments.

These are step by step instructions for Redhat 5 or CentOS.

If you need more details please refer to:
http://www.drbd.org/users-guide/

Configuring MySQL for DRBD
http://dev.mysql.com/doc/refman/5.1/en/ha-drbd-install-mysql.html

Getting started:

The OS in this example is CentOS 5.5. I added a new disk (/dev/sde) to the four-disk RAID-5 and RAID-1 setup I was already using. I'm only creating an 8 GB disk (VMware). You should start with a partition (LVM and/or RAID) big enough for your data.

# uname -a 

Linux db1.grennan.com 2.6.18-194.8.1.el5 #1 SMP Thu Jul 1 19:04:48 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

 # df

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md1              24065660   2826564  19996896  13% /
/dev/md0                101018     20988     74814  22% /boot
tmpfs                   513476         0    513476   0% /dev/shm

# fdisk -l /dev/sde

Disk /dev/sde: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1        1044     8385898+  83  Linux

DRBD:

Installation:
On machine1 and machine2 install DRBD and its kernel module. You may need to review the packages you have available using 'yum list | grep drbd'. These are for CentOS 5.5. You may also need to reboot after this step.

 # yum -y install drbd
 # yum -y install kmod-drbd82.x86_64
 # modprobe drbd

Configuration:
On both machines edit this configuration file. Edit the host names, disks, and IP addresses to match your machines.

# vi /etc/drbd.conf

#
# please have a look at the example configuration file in
# /usr/share/doc/drbd82/drbd.conf
#
# Our MySQL share
resource db {
  protocol C;
  startup { wfc-timeout 0; degr-wfc-timeout 120; }
  disk { on-io-error detach; }   # or panic, ...
  syncer { rate 6M; }
  on db1.grennan.com {
    device    /dev/drbd1;
    disk      /dev/sde1;
    address   192.168.2.13:7789;
    meta-disk internal;
  }
  on db2.grennan.com {
    device    /dev/drbd1;
    disk      /dev/sde1;
    address   192.168.2.14:7789;
    meta-disk internal;
  }
}

 

 

Manage DRBD processes:

On both machines run

 # drbdadm adjust db

 

On machine1

 # drbdsetup /dev/drbd1 primary -o
 # service drbd start 

 

On machine2

 # service drbd start

 

On both machines(see status):

 # service drbd status

 

On machine1

 # mkfs -j /dev/drbd1
 # tune2fs -c -1 -i 0 /dev/drbd1
 # mkdir /data
 # mount -o rw /dev/drbd1 /data

 

On machine2

 # mkdir /data

Test failover:
This is how you perform a manual fail over. You will use HA to do this for you in the next sections.

 

On primary (server1)

 # umount /data
 # drbdadm secondary db

 

On secondary (server2)

 # drbdadm primary db
 # service drbd status
 # mount -o rw /dev/drbd1 /data
 # df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md1              24065660   1898696  20924764   9% /
/dev/md0                101018     14886     80916  16% /boot
tmpfs                   513472         0    513472   0% /dev/shm
/dev/drbd1             8253948    149628   7685040   2% /data


Note we never formatted (mkfs) the disk on machine2! Here it is, ready to go; DRBD has copied all the data.

 

MySQL:

Here are a few notes for you to think about.

  • The default location for MySQL data is /var/lib/mysql.  You will be moving this to /data/mysql.
  • MySQL configuration is in /etc/my.cnf.  So that changes to the configuration move with failover, you should put my.cnf in /data/mysql and create a sym-link of /etc/my.cnf to this file.
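The my.cnf-on-DRBD idea from the second bullet can be sketched as follows (using a scratch directory as a stand-in for the real filesystem root, so the commands are safe to try anywhere):

```shell
# Demonstrate the layout: my.cnf lives on the replicated volume and
# /etc/my.cnf is a symlink to it, so config changes fail over with the data.
root=$(mktemp -d)                      # stand-in for /
mkdir -p "$root/etc" "$root/data/mysql"
printf '[mysqld]\ndatadir=/data/mysql\n' > "$root/data/mysql/my.cnf"
ln -s "$root/data/mysql/my.cnf" "$root/etc/my.cnf"
readlink "$root/etc/my.cnf"            # shows the replicated copy
```

On the real machines the same two commands (move the file, create the symlink) run against /etc/my.cnf and /data/mysql/my.cnf, on both nodes.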

Now comes the hurdle.

  • Install MySQL as you wish.
  • Move your data directory to /data/mysql

On machine1

 # mkdir /data/mysql
 # chown  mysql.mysql /data/mysql
 # cp -prv /var/lib/mysql/* /data/mysql

Start MySQL on machine1.
Create a sample database and table, then stop MySQL. Do a manual switchover of DRBD, start MySQL on machine2, and query for that table. It should work. But this is of no use if you have to switch over manually every time. When you have this working, you are ready to move on to Heartbeat.
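The manual test just described can be collected into a sketch. The `failtest` database name is an illustrative assumption, and the `sh -n` call below only syntax-checks the script; the commands themselves must be run on the real machine1 and machine2.

```shell
# Manual DRBD switchover test for MySQL (sketch; run the steps on the real nodes).
cat > /tmp/switchover-test.sh <<'EOF'
#!/bin/sh
# --- on machine1 ---
mysql -e "CREATE DATABASE failtest; CREATE TABLE failtest.t (id INT);"
service mysql stop
umount /data
drbdadm secondary db
# --- on machine2 ---
drbdadm primary db
mount -o rw /dev/drbd1 /data
service mysql start
mysql -e "SELECT * FROM failtest.t;"   # should succeed: the data followed DRBD
EOF
sh -n /tmp/switchover-test.sh && echo "syntax OK"
```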

Here are a couple of scripts to make this easy.

 

drbd-secondary:

 # service mysql stop
 # umount /data
 # drbdadm secondary db

drbd-primary:

 # drbdadm primary db
 # mount -o rw /dev/drbd1 /data
 # service mysql start

 

 

 


 

HA:

 

  • IMPORTANT: Heartbeat uses either Linux Standard Base (LSB) resource agents or Heartbeat Resource Agents (HRA) to start and stop heartbeat resources. You will be adding MySQL (LSB), drbddisk (HRA), and IPaddr2 (HRA) as our heartbeat resources.
  • Refer to this page on Resource Agents
  • As you are aware, many *nix services are started using LSB resource agents; they are found in /etc/init.d
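As a concrete illustration of the LSB convention Heartbeat relies on (`status` must exit 0 when the service is running and 3 when it is stopped), here is a toy init script. It is a mock for demonstration only, not a real resource agent.

```shell
# A minimal mock of an LSB init script, to show the status exit codes.
cat > /tmp/mock-initd <<'EOF'
#!/bin/sh
case "$1" in
  start)  touch /tmp/mock.pid ;;
  stop)   rm -f /tmp/mock.pid ;;
  status) [ -f /tmp/mock.pid ] && exit 0 || exit 3 ;;
esac
EOF
chmod +x /tmp/mock-initd
rm -f /tmp/mock.pid                              # make sure it starts "stopped"
/tmp/mock-initd status; echo "status exit: $?"   # prints: status exit: 3
/tmp/mock-initd start
/tmp/mock-initd status; echo "status exit: $?"   # prints: status exit: 0
```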

 

Installation:

On machine1 and machine2 install Heartbeat and needed utilities.  You may need to review the packages you have available using ‘yum list | grep drbd’.  These are for CentOS 5.5.  You may also need to reboot after this step.

 # yum -y install gnutls*
 # yum -y install ipvsadm*
 # yum -y install heartbeat*
 # yum -y install heartbeat.x86_64

 

Configuration:

 

Edit /etc/sysctl.conf and set net.ipv4.ip_forward = 1

 # vi /etc/sysctl.conf

# Controls IP packet forwarding
net.ipv4.ip_forward = 1

Then apply the change without a reboot: /sbin/sysctl -p

 # /sbin/chkconfig --level 2345 heartbeat on
 # /sbin/chkconfig --del ldirectord

 

 

Configure HA:

 

You need to setup the following configuration files on both machines:

 

# vi /etc/ha.d/ha.cf
# /etc/ha.d/ha.cf content
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694          # If you have multiple HA setups in the same network, use different ports
bcast eth0           # Linux
auto_failback on     # This will fail back to machine1 after it comes back
ping 192.168.2.1     # The gateway
apiauth ipfail gid=haclient uid=hacluster
node db1.grennan.com
node db2.grennan.com

 

On both machines

NOTE: Assuming 192.168.2.15 is the virtual IP for your MySQL resource and mysqld is the LSB resource agent. The host name (db2) should be the secondary server's name.

 # vi /etc/ha.d/haresources

# /etc/ha.d/haresources content
db2.grennan.com LVSSyncDaemonSwap::master IPaddr2::192.168.2.15/24/eth0 drbddisk::db Filesystem::/dev/drbd1::/data::ext3 mysqld

# vi /etc/ha.d/authkeys

#/etc/ha.d/authkeys content
auth 2
2 sha1 BigSecretKeyks9wjwlf9gskg905snvl
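The key on the second line is just an arbitrary shared secret. One way to generate a fresh one (any method that produces a random string works equally well):

```shell
# Generate a random sha1 shared secret for /etc/ha.d/authkeys.
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
printf 'auth 2\n2 sha1 %s\n' "$KEY"   # paste this into authkeys on both nodes
```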

Now, make your authkeys secure:

# chmod 600 /etc/ha.d/authkeys

Check your work:

On both machines, one at a time, stop MySQL and make sure MySQL does not start when the system reboots (init 6).

If it does, you may need to remove it from the init process with:

 # /sbin/chkconfig --level 2345 mysqld off

Start Heartbeat.

# service heartbeat start

These commands will give you status about this LVS setup:

 # /etc/ha.d/resource.d/LVSSyncDaemonSwap master status
 # ip addr sh
 # service heartbeat status
 # df
 # service mysqld status

Access your HA-MySQL server like:

 # mysql -h 192.168.2.15

Shut down machine1 to see MySQL come up on machine2 ('shutdown now').

Start machine1 to see MySQL back on machine1.

BALANCER


Building a Load-Balancing Cluster Using Ubuntu 10.04

People tend to assume that building a load-balancing cluster is complicated and confusing. And... they are right. But there is actually one easy way to get there, using a tool called balance. First, a little background on the concept of clustering. In principle, clustering takes two approaches:

1. High Availability (Failover): if one server fails to provide a given service, its work is automatically handed over to another server.
2. High Throughput (Performance): here the goal is high performance, achieved by spreading the workload across a group of servers.

Examples:

– High-Performance Computing (HPC): a group of servers working together at the same time on a specific task, usually heavy computation such as earth simulation, rendering animated films, etc.

– Load Balancing: spreading the workload across a group of servers outside the computing context, for example sharing the load of a web server, mail server, etc.

How do we achieve this?
There are several open-source packages we can use:
1. Linux High-Availability (http://www.linux-ha.org)
2. RedHat Cluster Suite and Piranha (http://www.redhat.com)
3. Linux Virtual Server (http://www.linuxvirtualserver.org)
4. BeoWulf Cluster (http://www.beowulf.org)
5. Openmosix (http://openmosix.sourceforge.net)

However, these solutions are sometimes too "sophisticated", or simply overkill, for our clustering goal. This is where 'balance' comes in. What does it offer?
1. It is a user-space program. No kernel compiling or the like; it runs straight from the command line.
2. TCP load balancing. Just name the TCP protocol or port you want to balance.

Setup:
1. Download the package from http://www.inlab.de/balance.html
$ wget http://www.inlab.de/balance-3.40.tar.gz

2. Extract, compile, and install:
$ tar zxvf balance-3.40.tar.gz
$ cd balance-3.40
$ vi Makefile

Change this line: MANDIR=${BINDIR}/../man/man1
On Ubuntu, to: MANDIR=/usr/share/man/man1
On RedHat, to: MANDIR=/usr/local/share/man/man1
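If you prefer a non-interactive edit, the same change can be made with sed. This is demonstrated on a scratch copy in /tmp; point it at the real Makefile inside balance-3.40.

```shell
# Rewrite the MANDIR line for Ubuntu without opening an editor (demo file).
printf 'MANDIR=${BINDIR}/../man/man1\n' > /tmp/Makefile.demo
sed -i 's|^MANDIR=.*|MANDIR=/usr/share/man/man1|' /tmp/Makefile.demo
cat /tmp/Makefile.demo   # -> MANDIR=/usr/share/man/man1
```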

$ make
$ make install

Usage:
First, assume the following scenario: we have a website whose load we want to spread across three web servers, set up as shown in the figure. Three web servers: www1 (192.168.0.1), www2 (192.168.0.2), and www3 (192.168.0.3). In front of them we install a server (192.168.0.254) whose job is to distribute the work among the www servers. The IP users access is therefore 192.168.0.254, not the individual www servers.

Usage then looks like this:
balance -f http 192.168.0.1 192.168.0.2 192.168.0.3

The -f option makes balance run in the foreground, which is handy for debugging and cancelling. Once everything looks OK, run it without -f and balance will run in the background.
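With no options on the channels, balance simply cycles through the listed servers round-robin (my understanding of its default policy). The loop below only illustrates that distribution in plain shell; it is not balance itself.

```shell
# Illustrate round-robin selection over the three backends from the example.
SERVERS="192.168.0.1 192.168.0.2 192.168.0.3"
i=0
for req in 1 2 3 4 5 6; do
  n=$(( (i % 3) + 1 ))                          # cycle 1, 2, 3, 1, 2, 3, ...
  pick=$(echo "$SERVERS" | cut -d' ' -f"$n")
  echo "request $req -> $pick"
  i=$((i + 1))
done
# requests 1 and 4 go to .1, 2 and 5 to .2, 3 and 6 to .3
```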

To watch balance at work, open a terminal and load the website at 192.168.0.254 repeatedly. The easiest way is a text browser such as elinks:
watch elinks --dump http://192.168.0.254

For testing, make the content on 192.168.0.1, 192.168.0.2, and 192.168.0.3 different; the command above will then show different content on each load, a sign that balance is spreading the web traffic across the three servers.
Another example:
balance -f http 192.168.0.1::100 ! 192.168.0.2::100 ! 192.168.0.3

This means: HTTP connections go preferentially to 192.168.0.1, up to 100 connections; once it is full, they spill over to 192.168.0.2, also up to 100 connections; the rest go to 192.168.0.3. What if we need to handle connections that require a session, such as a dynamic website using PHP? That can be done with the '%' option, which enables sessions, like this:

balance -f http 192.168.0.1 192.168.0.2 192.168.0.3 %
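The overflow rule from the earlier example ('a::100 ! b::100 ! c') behaves like the selection function below. This is a plain-shell illustration of the semantics as described above, not balance's actual code.

```shell
# Sketch of overflow selection: a connection goes to the first server whose
# current connection count is still below its limit of 100.
pick_server() {
  c1=$1; c2=$2   # current connection counts on server1 and server2
  if [ "$c1" -lt 100 ]; then echo 192.168.0.1
  elif [ "$c2" -lt 100 ]; then echo 192.168.0.2
  else echo 192.168.0.3
  fi
}
pick_server 42 0      # -> 192.168.0.1  (server1 still has room)
pick_server 100 57    # -> 192.168.0.2  (server1 full, server2 has room)
pick_server 100 100   # -> 192.168.0.3  (both full, rest go to server3)
```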

See 'man balance' for the full set of options. Can it only be used for HTTP? Certainly not; with a little exploration we can also use it for other purposes, such as load balancing internet access, email, proxies, and so on.

Closing
The balance program provides a practical, easy way to build a load-balancing cluster, and the resulting performance is quite good. If we want a more robust solution, we can use LVS (Linux Virtual Server) combined with linux-ha, though the setup will be far more complicated. We will cover that another time.

Happy experimenting!

 

Source: https://verrysoon030391.wordpress.com/tag/failover/

shisdew

Listens until think alike

moses.spaceku@yahoo.com / voip ipbx

Hosted PBX, IP-PBX SOHO/ CALL CENTER, VOICE GATEWAY, VOICE CARD, COST EFFECTIVE SOLUTIONS (LCR), GSM/CDMA GATEWAY