When you start the loadbalancer.org appliance you will see the following:
Default login:
Username: root
Password: loadbalancer
Access to the web interface from an external client:
http://192.168.1.129:9080
https://192.168.1.129:9443
You can access the web administration interface using the IP and ports shown on screen.
For the Sri Lanka project we are looking for performance, and the network diagram indicates we are happy to have the cluster on the same subnet as the rest of the network.
Direct Routing (DR) gives the fastest performance possible. It has the advantage over NAT that the loadbalancer does not become a bottleneck for incoming and outgoing packets: with DR the loadbalancer examines only the incoming packets, and the real servers route their replies directly back to the requesting client.
The web interface is the only way to fully configure the loadbalancer VM. The console tool lbwizard will get it initialised, and any further configuration can then be done via the web interface.
To apply the Sri Lanka configuration with lbwizard, follow these steps.
On the first Loadbalancer:
//Start
Is this unit part of an HA Pair?
YES
Have you already set up the Slave?
NO
Is this a one-armed configuration?
YES
Enter the IP Address for the interface eth0?
Enter the IP address you wish to be assigned to the SLAVE loadbalancer.
Enter the netmask for interface eth0?
Enter netmask for the subnet.
Enter the Floating IP address?
Enter the IP address that will be associated with the HA pair of loadbalancers.
//Finish
On the 2nd loadbalancer VM, run the lbwizard.
//Start
Is this unit part of an HA-Pair?
YES
Have you already set up the Slave?
YES
What is the slave unit's IP address?
Enter the IP which you entered when configuring the other loadbalancer VM.
Is this a one-armed configuration?
YES
Enter the IP Address for the interface eth0?
Enter the IP that will be assigned to the MASTER loadbalancer.
Enter the netmask for interface eth0?
Enter the subnet netmask.
Enter the Floating IP address?
Enter the IP address that will be associated with the HA pair of loadbalancers.
Enter the address of the default gateway?
Enter the default gateway for the subnet.
Enter the IP of the nameserver?
Enter the IP of the DNS server.
Enter the port for the first Virtual server?
Enter 22 for SSH.
Enter the IP address of the first real server?
Enter the real IP of the first appserver.
//Finish
Now that this is complete, we need to go to the web admin interface to configure the second real server, as the lbwizard program only allows you to configure one real server.
Now log in to the web admin interface using the default credentials:
username: loadbalancer
password: loadbalancer
Note: connect to the IP you have now set for your master loadbalancer.
Go to the Edit Configuration tab.
Now click Add a real server:
Enter a label.
Enter the IP address of the server plus the port of the service, e.g. 192.168.1.125:22.
Edit Configuration -> Virtual Servers
Persistence -> NO
Scheduler -> LC
LC - Least-Connection: assign more jobs to real servers with fewer active jobs.
Service to check -> custom1
Check port -> 22
Forwarding Method -> DR
Feedback Method -> Agent
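Once the virtual server and both real servers are configured, a quick way to verify the resulting LVS table from the appliance console (assuming you have shell access to the master) is:
ipvsadm -L -n
This should list the virtual server on port 22 with both real servers behind it, with Route as the forwarding method.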
ARP problem when using DR
Every real server must be configured to respond to the VIP address as well as the RIP address. You can use iptables (netfilter) on the real server to redirect incoming packets destined for the virtual server IP address. This is a simple case of adding the following commands to your startup script (rc.local):
# Replace 10.0.0.21 with the Virtual Server IP
iptables -t nat -A PREROUTING -p tcp -d 10.0.0.21 -j REDIRECT
chkconfig iptables on
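To confirm the redirect rule took effect on a real server, list the nat PREROUTING chain:
iptables -t nat -L PREROUTING -n
You should see a REDIRECT rule for the VIP on the service port.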
Recital 10 Express Edition Linux x86 Free Download.
Recital 10 introduces the free single-user developer edition called Recital Express, which can be used to develop and test multi-user Recital, Recital Server and Recital Web applications. Once the applications are ready for deployment, a commercial license must be purchased. Recital Express, Recital Server Express and Recital Web Express can be used unlicensed for non-commercial purposes only. What does this download include?
Recital 10: A powerful scripting language with an embedded database, used for developing desktop database applications on Linux and UNIX. Recital has a high degree of compatibility with Microsoft FoxPro, enhanced with many additional enterprise class extensions.
Recital 10 Web: A server-side scripting language with an embedded database for creating web 2.0 applications. Includes plugins for Apache and IIS. Coming soon! Recital Web Framework, a comprehensive OO framework built on YUI for building RIA (Rich Internet Applications) in Recital Web.
Recital 10 Server: A cross-platform SQL database and application server which includes client drivers for ODBC, JDBC and .NET, enabling Recital data to be accessed client/server from Windows, Linux and Mac.
Recital 10 Replication: A comprehensive replication product that addresses urgent data movement and synchronization needs, to help support disaster recovery and business continuity for Recital applications.
Recital 10 Quick Start:
Graphical Installation
Note: The installation must be run as root. For systems with a hidden root account, please use ’Run as Root’.
- Download the distribution file into a temporary directory
- Check that the distribution file has the execute permission set
- Run the distribution file
- Follow the on screen instructions:
- License agreement
- Select components
- Installation directory and shortcuts
- Linking to /usr/bin
- ODBC Installation type (Recital Server / Recital Client Drivers)
- Java Virtual Machine selection (Recital Server / Recital Client Drivers)
- Tomcat Installation type (Recital Server / Recital Client Drivers)
- Apache Firecat Plugin Installation (Recital Web Developer)
- Replication Service Type (Recital Replication Server)
- Install license file
Text Installation
Note: The installation must be run as root. For systems with a hidden root account, please precede commands with ’sudo’.
- Download the distribution file into a temporary directory
- Check that the distribution file has the execute permission set
- Run the distribution file
- Follow the on screen instructions (the same steps as listed for the graphical installation above)
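For example, assuming a downloaded installer named recital-10-linux-x86.bin (substitute the actual file name of the distribution you downloaded):
$ chmod +x recital-10-linux-x86.bin
$ sudo ./recital-10-linux-x86.bin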
The Mac OS X distribution is packaged as a disk image using hdiutil:
$ hdiutil create /tmp/tmp.dmg -ov -volname "RecitalInstall" -fs HFS+ -srcfolder "/tmp/macosxdist/"
$ hdiutil convert /tmp/tmp.dmg -format UDZO -o RecitalInstall.dmg
APPEND FROM TYPE CSV
The TYPE keyword of the APPEND FROM command has now been enhanced to support a comma separated values (CSV) format:
APPEND FROM <file-name.csv> TYPE CSV
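A minimal usage sketch (the table and CSV file names are illustrative):
use customers
append from customers.csv type csv  && load records from the CSV file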
Many motherboards nowadays have integrated gigabit Ethernet adapters that use a Realtek NIC chipset.
The Realtek r8168B network card does not work out of the box in Red Hat/CentOS 5.3: instead of loading the r8168 driver, modprobe loads the r8169 driver, which is broken, as can be seen from the large numbers of dropped packets reported by ifconfig. A solution is to download the r8168 driver from the Realtek website and install it using the following steps:
Check whether the built-in driver, r8169.ko (or r8169.o for kernel 2.4.x), is installed.
# lsmod | grep r8169
If it is installed remove it.
# rmmod r8169
Download the R8168B Linux driver from the Realtek website into /root.
Unpack the tarball:
# cd /root
# tar vjxf r8168-8.012.00.tar.bz2
Change to the directory:
# cd r8168-8.012.00
If you are running the target kernel, then you should be able to do:
# make clean modules
# make install
# depmod -a
# insmod ./src/r8168.ko (or r8168.o in linux kernel 2.4.x)
Make sure modprobe knows not to use r8169, and that depmod doesn’t find the r8169 module:
# echo "blacklist r8169" >> /etc/modprobe.d/blacklist
# mv /lib/modules/`uname -r`/kernel/drivers/net/r8169.ko \
    /lib/modules/`uname -r`/kernel/drivers/net/r8169.ko.bak
You can check whether the driver is loaded by using the following commands.
# lsmod | grep r8168
# ifconfig -a
If a device name, ethX, is shown, the Linux driver is loaded. You can then use the following command to activate it.
# ifconfig ethX up
After this you should not see any more dropped packets reported.
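To make sure the interface keeps binding to the new driver after a reboot, an alias can be added to /etc/modprobe.conf (standard practice on RHEL/CentOS 5; eth0 is an assumption, adjust to your interface name):
# echo "alias eth0 r8168" >> /etc/modprobe.conf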
DRBD:
DRBD (Distributed Replicated Block Device) forms the storage redundancy portion of an HA cluster setup. Explained in basic terms, DRBD provides a means of achieving RAID 1 behaviour over a network: whole block devices are mirrored across the network.
To start off you will need two identically sized raw drives or partitions. Many how-tos on the internet assume the use of whole drives; this will of course perform better, but if you are simply getting familiar with the technology you can repartition existing drives to allow for two equally sized raw partitions, one on each of the systems you will be using.
There are 3 DRBD replication modes:
• Protocol A: Write I/O is reported as completed as soon as it has reached the local disk and the local TCP send buffer.
• Protocol B: Write I/O is reported as completed as soon as it has reached the local disk and the remote TCP buffer cache.
• Protocol C: Write I/O is reported as completed as soon as it has reached both the local and the remote disks.
If we were installing the HA cluster on a slow LAN, or if the geographical separation of the systems involved was great, then I recommend you opt for asynchronous mirroring (Protocol A), where notification of a completed write operation occurs as soon as the local disk write is performed. This will greatly improve performance.
As we are setting up our HA cluster connected via a fast LAN, we will be using DRBD in fully synchronous mode, Protocol C.
With Protocol C, the file system on the active node is only notified that the write operation has finished when the block is written to both disks of the cluster. Protocol C is the most commonly used mode of DRBD.
/etc/drbd.conf
global { usage-count yes; }
common { syncer { rate 10M; } }
resource r0 {
protocol C;
net {
max-buffers 2048;
ko-count 4;
}
on bailey {
device /dev/drbd0;
disk /dev/sda4;
address 192.168.1.125:7789;
meta-disk internal;
}
on giskard {
device /dev/drbd0;
disk /dev/sda3;
address 192.168.1.127:7789;
meta-disk internal;
}
}
drbd.conf explained:
Global section, usage-count. The DRBD project keeps statistics about the usage of DRBD versions. It does this by contacting an HTTP server each time a new DRBD version is installed on a system. This can be disabled by setting usage-count no;.
The common section contains configuration inherited by all resources defined.
The synchronisation rate is set by going to the syncer section and assigning a value to the rate setting. The synchronisation rate refers to the rate at which the data is mirrored in the background. The best setting for the synchronisation rate depends on the speed of the network over which the DRBD systems communicate: 100Mbps Ethernet supports around 12MBps, Gigabit Ethernet somewhere around 125MBps.
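For example, on a dedicated Gigabit replication link a commonly quoted rule of thumb is to give the background resync roughly a third of the available bandwidth (the figure here is illustrative, not the value used in the configuration above):
common { syncer { rate 33M; } }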
In the configuration above we have a resource defined as r0; the nodes are configured in the "on" host subsections.
"Device" configures the path of the logical block device that will be created by DRBD
"Disk" configures the block device that will be used to store the data.
"Address" configures the IP address and port number of the host that will hold this DRBD device.
"Meta-disk" configures the location where the metadata about the DRBD device will be stored.
You can set this to internal and DRBD will use the physical block device to store the information, by recording the metadata within the last sections of the disk.
Once you have created your configuration file, you must conduct the following steps on both the nodes.
Create device metadata.
$ drbdadm create-md r0
v08 Magic number not found
Writing meta data...
initialising activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
Attach the backing device.
$ drbdadm attach r0
Set the synchronisation parameters.
$ drbdadm syncer r0
Connect it to the peer.
$ drbdadm connect r0
Run the service.
$ service drbd start
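At this point both nodes typically report Secondary/Secondary with Inconsistent data. The usual next steps are sketched below, assuming DRBD 8.x and the r0 / /dev/drbd0 names from the configuration above; run the primary command on one node only.
Check the connection and disk state:
$ cat /proc/drbd
On the node chosen as primary only, force it primary and start the initial full sync:
$ drbdadm -- --overwrite-data-of-peer primary r0
Still on the primary, create the filesystem that heartbeat will later mount:
$ mkfs.ext3 /dev/drbd0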
Heartbeat:
Heartbeat provides the IP redundancy and the service HA functionality.
On failure of the primary node, the VIP is assigned to the secondary node and the services configured to be HA are started on the secondary node.
Heartbeat configuration:
/etc/ha.d/ha.cf
## /etc/ha.d/ha.cf on node1
## This configuration is to be the same on both machines
## This example is made for version 2, comment out crm if using version 1
## Replace the node names below with the names of your nodes
crm no
keepalive 1
deadtime 5
warntime 3
initdead 20
bcast eth0
auto_failback yes
node bailey
node giskard
/etc/ha.d/authkeys
The configuration below turns authentication and encryption off for the node packets; crc provides simple integrity checking only.
Note: make sure the authkeys file has the correct permissions (chmod 600).
## /etc/ha.d/authkeys
auth 1
1 crc
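If the heartbeat link crosses a network you do not fully trust, a signed alternative using sha1 with a shared secret can be used instead (the passphrase below is a placeholder):
auth 1
1 sha1 ReplaceWithASharedSecret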
/etc/ha.d/haresources
In the haresources line below, 192.168.1.40 is the VIP (Virtual IP) assigned to the cluster, "smb" is the service we wish to make HA, and /dev/drbd0 is the DRBD device you configured in drbd.conf.
## /etc/ha.d/haresources
## This configuration is to be the same on both nodes
bailey 192.168.1.40 drbddisk Filesystem::/dev/drbd0::/drbdData::ext3 smb
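Once the configuration files are in place on both nodes, create the mount point referenced in the haresources line and start heartbeat (commands assume a Red Hat style init system, as used elsewhere in this guide):
# mkdir -p /drbdData
# service heartbeat start
# chkconfig heartbeat on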
This guide will assist you in setting up an rsnapshot backup server on your network. rsnapshot uses rsync via ssh to perform unattended backups of multiple systems in your network. The guide can be found on the CentOS website.