This guide will assist you in setting up an rsnapshot backup server on your network. rsnapshot uses rsync via ssh to perform unattended backups of multiple systems. The full guide can be found on the CentOS website.
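As a minimal sketch of what such a setup involves (the paths, hostnames, and retention values below are illustrative assumptions, not taken from the guide), the retention policy and backup points are defined in /etc/rsnapshot.conf, and a cron entry drives the unattended runs:

# /etc/rsnapshot.conf excerpt -- fields must be separated by tabs
snapshot_root   /backup/snapshots/
cmd_ssh         /usr/bin/ssh
interval        daily   7
interval        weekly  4
# pull /etc and /home from a remote host over rsync+ssh
backup          root@server1.example.com:/etc/     server1/
backup          root@server1.example.com:/home/    server1/

# root crontab entry for the unattended daily run
0 2 * * * /usr/bin/rsnapshot daily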
After split brain has been detected, one node will always have the resource in a StandAlone connection state. The other might either also be in the StandAlone state (if both nodes detected the split brain simultaneously), or in WFConnection (if the peer tore down the connection before the other node had a chance to detect split brain).
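You can confirm the connection state of the resource on each node directly from DRBD, for example:
# drbdadm cstate resource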
At this point, unless you configured DRBD to automatically recover from split brain, you must manually intervene by selecting one node whose modifications will be discarded (this node is referred to as the split brain victim). This intervention is made with the following commands:
# drbdadm secondary resource
# drbdadm disconnect resource
# drbdadm -- --discard-my-data connect resource
On the other node (the split brain survivor), if its connection state is also StandAlone, you would enter:
# drbdadm connect resource
You may omit this step if the node is already in the WFConnection state; it will then reconnect automatically.
If all else fails and the machines are still in a split-brain condition, then on the secondary (backup) machine issue:
# drbdadm invalidate resource
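Once reconnected, the invalidated node will perform a full resynchronization from its peer. Progress can be monitored through /proc/drbd, for example:
# watch -n1 cat /proc/drbd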
Here's how to set up field validation based on dynamic values from another table.
Using the products.dbf table from the southwind sample database, validation can be added to the categoryid field to ensure it matches an existing categoryid from the categories.dbf table.
open database southwind
alter table products add constraint;
(categoryid set check rlookup(products.categoryid,categories))

The rlookup() function checks whether an expression exists in the index (master or specified) of the specified table. An attempt to update categoryid with a value not in the list will give the error: Validation on field 'CATEGORYID' failed.
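As a quick illustration (assuming, for the sake of example, that categoryid 99 does not exist in categories.dbf and that products.dbf has a productid column), an update such as the following would then be rejected with that error:

update products set categoryid = 99 where productid = 1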
If you have access to the Recital Workbench, you can use the modify structure worksurface to add and alter your dictionary entries, including a customized error message if required.

# Accept connections to port 8001 on the loopback interface only
iptables -I INPUT -j ACCEPT -p tcp --destination-port 8001 -i lo
# Drop connections to port 8001 arriving on the external interface (eth0)
iptables -A INPUT -j DROP -p tcp --destination-port 8001 -i eth0
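Note that these rules are not persistent across a reboot; on a Red Hat-style system they can be saved with, for example:

service iptables save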
There is a good article on the Gluster website which gives useful information regarding file system optimization suitable for an HA Recital cluster solution.
./configure CFLAGS='-arch x86_64' APXSLDFLAGS='-arch x86_64' --with-apxs=/usr/sbin/apxs

Then you must pass these additional flags to the apxs command in order to generate a Universal Binary shared module:

-Wl,-dynamic -Wl,'-arch ppc' -Wl,'-arch ppc64' -Wl,'-arch i386' -Wl,'-arch x86_64' -Wc,-dynamic -Wc,'-arch ppc' -Wc,'-arch ppc64' -Wc,'-arch i386' -Wc,'-arch x86_64'
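For illustration, a complete apxs invocation using those flags might look something like this (the module source file name used here is only an assumption):

apxs -c -Wc,-dynamic -Wc,'-arch ppc' -Wc,'-arch ppc64' -Wc,'-arch i386' -Wc,'-arch x86_64' \
     -Wl,-dynamic -Wl,'-arch ppc' -Wl,'-arch ppc64' -Wl,'-arch i386' -Wl,'-arch x86_64' \
     mod_recital.c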
If you then run the file command on the shared module it should return:

$ file mod_recital.so
mod_recital2.2.so: Mach-O universal binary with 4 architectures
mod_recital2.2.so (for architecture ppc7400): Mach-O bundle ppc
mod_recital2.2.so (for architecture ppc64): Mach-O 64-bit bundle ppc64
mod_recital2.2.so (for architecture i386): Mach-O bundle i386
mod_recital2.2.so (for architecture x86_64): Mach-O 64-bit bundle x86_64

The Apache module files are stored in the /usr/libexec/apache2/ directory on a default Apache install on the Mac, and the configuration file is /private/etc/apache2/httpd.conf.
In this article Barry Mavin explains step by step how to set up a Linux HA (High Availability) cluster for running Recital applications on Red Hat/CentOS 5.3, although the general configuration should work for other Linux versions with a few minor changes.