Recital can use an index tag built with a FOR condition to optimize a query whose WHERE clause matches that condition:
USE accounts
INDEX ON account_no TAG outstanding FOR balance > 0
EXPLAIN SELECT * FROM accounts WHERE balance > 0
Optimized using for condition on tag 'OUTSTANDING'
// the click event handler: clicking ends any edit in progress
private function onclick_sourcetree(e:Event):void {
    yourTree.editable = false;
}
// the double-click event handler: make the tree editable and
// begin editing column 0 of the currently selected row
private function ondoubleclick_sourcetree(e:Event):void {
    yourTree.editable = true;
    yourTree.editedItemPosition = {columnIndex:0, rowIndex:yourTree.selectedIndex};
}
There is a good article on the Gluster website here which gives some useful information on file system optimization suitable for an HA Recital cluster solution.
Many motherboards nowadays have integrated gigabit Ethernet that uses a Realtek NIC chipset.
The Realtek r8168B network card does not work out of the box in Red Hat/CentOS 5.3: instead of loading the r8168 driver, modprobe loads the r8169 driver, which is broken, as can be seen with ifconfig reporting large numbers of dropped packets. The solution is to download the r8168 driver from the Realtek website and install it using the following steps:
Check whether the built-in driver, r8169.ko (or r8169.o for kernel 2.4.x), is loaded.
# lsmod | grep r8169
If it is loaded, remove it.
# rmmod r8169
Download the R8168B Linux driver from here into /root.
Unpack the tarball:
# cd /root
# tar vjxf r8168-8.012.00.tar.bz2
Change to the directory:
# cd r8168-8.012.00
If you are running the target kernel, then you should be able to do:
# make clean modules
# make install
# depmod -a
# insmod ./src/r8168.ko (or r8168.o on Linux kernel 2.4.x)
Make sure modprobe knows not to use r8169, and that depmod doesn't find the r8169 module:
# echo "blacklist r8169" >> /etc/modprobe.d/blacklist
# mv /lib/modules/`uname -r`/kernel/drivers/net/r8169.ko \
    /lib/modules/`uname -r`/kernel/drivers/net/r8169.ko.bak
You can check whether the driver is loaded by using the following commands.
# lsmod | grep r8168
# ifconfig -a
If a device named ethX is listed, the Linux driver is loaded. You can then use the following command to activate it.
# ifconfig ethX up
After this you should not see any more dropped packets reported.
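To double-check which driver is actually bound to the interface, ethtool can report it (assuming the ethtool package is installed):
# ethtool -i ethX
The driver field should now show r8168 rather than r8169.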
The REQUIRE() statement includes and executes the contents of the specified file at the current program execution level.
When a file is included, the code it contains inherits the variable scope of the line on which the include occurs. Any variables, procedures, functions or classes declared in the included file will be available at the current program execution level.
The REQUIRE_ONCE() statement is identical to the REQUIRE() statement except that Recital will check to see if the file has already been included and, if so, ignore the command.
The full syntax is:
REQUIRE( expC )
REQUIRE_ONCE( expC )
e.g.
REQUIRE_ONCE( "myapp/myglobals.prg" )
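As an illustrative sketch (the file name and variable are hypothetical, using FoxPro-compatible syntax), a file of application globals can be included once and its declarations used at the current execution level:
&& myapp/myglobals.prg
PUBLIC gcAppName
gcAppName = "My Application"

&& main.prg
REQUIRE_ONCE( "myapp/myglobals.prg" )
? gcAppName    && declared in the included file, available here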
After split brain has been detected, one node will always have the resource in a StandAlone connection state. The other might either also be in the StandAlone state (if both nodes detected the split brain simultaneously), or in WFConnection (if the peer tore down the connection before the other node had a chance to detect split brain).
At this point, unless you configured DRBD to automatically recover from split brain, you must manually intervene by selecting one node whose modifications will be discarded (this node is referred to as the split brain victim). This intervention is made with the following commands:
# drbdadm secondary resource
# drbdadm disconnect resource
# drbdadm -- --discard-my-data connect resource
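Here resource is the name of the affected resource as defined in drbd.conf; for a resource named r0, for example, the victim-side sequence would be:
# drbdadm secondary r0
# drbdadm disconnect r0
# drbdadm -- --discard-my-data connect r0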
On the other node (the split brain survivor), if its connection state is also StandAlone, you would enter:
# drbdadm connect resource
You may omit this step if the node is already in the WFConnection state; it will then reconnect automatically.
If all else fails and the machines are still in a split-brain condition, then on the secondary (backup) machine issue:
# drbdadm invalidate resource
This discards the local data and forces a full resynchronisation from the peer.
Occasionally as a Linux administrator you will find yourself working on a remote server with no option but to force a reboot of the system. This may be for a number of reasons, but where I have found it most frequent is when working on Linux clusters in a remote location.
When the "reboot" or "shutdown" commands are executed daemons are gracefully stopped and storage volumes unmounted.
This is usually accomplished via scripts in the /etc/init.d directory which will wait for each daemon to shut down gracefully before proceeding on to the next one. This is where a situation can develop where your Linux server fails to shutdown cleanly leaving you unable to administer the system until it is inspected locally. This is obviously not ideal so the answer is to force a reboot on the system where you can guarantee that the system will power cycle and come back up. The method will not unmount file systems nor sync delayed disk writes, so use this at your own discretion.
To force the kernel to reboot the system we will be making use of the magic SysRq key.
The magic SysRq key provides a means to send low-level instructions directly to the kernel via the /proc virtual file system.
To enable the use of the magic SysRq option type the following at the command prompt:
echo 1 > /proc/sys/kernel/sysrq
Then to reboot the machine simply enter the following:
echo b > /proc/sysrq-trigger
Voilà! Your system will instantly reboot.
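If you want to give the kernel a chance to flush data first, the same interface accepts other SysRq commands: s requests an emergency sync of all filesystems and u remounts them read-only. A slightly gentler sequence (daemons are still not shut down cleanly) would be:
echo s > /proc/sysrq-trigger
echo u > /proc/sysrq-trigger
echo b > /proc/sysrq-trigger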
Recital is a dynamic programming language with an embedded high performance database engine particularly well suited for the development and deployment of high transaction throughput applications.
The Recital database engine is not a standalone process with which the application program communicates. Instead, the Recital database is an integral part of any applications developed in Recital.
Recital implements most of the SQL-99 standard for SQL, but also provides lower level navigational data access for performing high transaction throughput. It is the choice of the application developer whether to use SQL, navigational data access, or a combination of both depending upon the type of application being developed.
Although the Recital database engine operates as an embedded database in the user process, multiple users and other background processes may access the same data concurrently. Read accesses are satisfied in parallel. Recital uses automatic record-level locking when performing database updates. This provides a high degree of database concurrency and superior application performance, and differentiates the Recital database from other embeddable databases such as SQLite, which locks the entire database file during writing.
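As a brief sketch of that choice (table and field names are hypothetical, and the navigational commands follow the FoxPro-compatible syntax), the same lookup can be written either way:
&& SQL data access
SELECT * FROM accounts WHERE account_no = 12345

&& navigational data access using an index order
USE accounts
SET ORDER TO TAG account_no
SEEK 12345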
Key features of the Recital scripting language include:
- High performance database application scripting language
- Modern object-oriented language features
- Easy to learn, easy to use
- Fast, just-in-time compiled
- Loosely-typed
- Garbage collected
- Static arrays, associative arrays and objects
- Develop desktop or web applications
- Cross-platform support
- Extensive built-in functions
- Superb built-in SQL command integration
- Navigational data access for the most demanding applications
- Scripting language is upward compatible with FoxPro
Key features of the Recital database include:
- A broad subset of ANSI SQL 99, as well as extensions
- Cross-platform support
- Stored procedures
- Triggers
- Cursors
- Updatable Views
- System Tables
- Query caching
- Sub-SELECTs (i.e. nested SELECTs)
- Embedded database library
- Fault tolerant clustering support
- Chronological data versioning with database timelines
- Optional DES3 encrypted data
- Hot backup
- Client drivers for ODBC, JDBC and .NET
DRBD:
DRBD (Distributed Replicated Block Device) forms the storage redundancy portion of an HA cluster setup. Explained in basic terms, DRBD provides a means of achieving RAID 1 behavior over a network: whole block devices are mirrored across the network.
To start off you will need two identically sized raw drives or partitions. Many how-tos on the internet assume the use of whole drives, which of course gives better performance, but if you are simply getting familiar with the technology you can repartition existing drives to allow for two equally sized raw partitions, one on each of the systems you will be using.
There are 3 DRBD replication modes:
• Protocol A: write I/O is reported as completed as soon as it has reached the local disk and the local TCP send buffer
• Protocol B: write I/O is reported as completed as soon as it has reached the local disk and the remote buffer cache
• Protocol C: write I/O is reported as completed as soon as it has reached both the local and the remote disks
If we were installing the HA cluster on a slow LAN, or if the geographical separation of the systems involved was great, then I would recommend opting for asynchronous mirroring (Protocol A), where notification of a completed write operation occurs as soon as the local disk write is performed. This greatly improves performance.
As we are setting up our HA cluster connected via a fast LAN, we will be using DRBD in fully synchronous mode, Protocol C.
With Protocol C, the file system on the active node is only notified that the write operation has finished when the block has been written to both disks of the cluster. Protocol C is the most commonly used mode of DRBD.
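The sample drbd.conf below defines a single resource, r0, mirrored between two example nodes named bailey and giskard; substitute your own hostnames, partitions and IP addresses.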
/etc/drbd.conf
global { usage-count yes; }
common { syncer { rate 10M; } }
resource r0 {
  protocol C;
  net {
    max-buffers 2048;
    ko-count 4;
  }
  on bailey {
    device    /dev/drbd0;
    disk      /dev/sda4;
    address   192.168.1.125:7789;
    meta-disk internal;
  }
  on giskard {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   192.168.1.127:7789;
    meta-disk internal;
  }
}
drbd.conf explained:
Global section, usage-count: the DRBD project keeps statistics about the usage of DRBD versions. It does this by contacting an HTTP server each time a new DRBD version is installed on a system. This can be disabled by setting usage-count no;.
The common section contains configuration settings inherited by all defined resources.
The synchronisation rate is set by going to the syncer section and assigning a value to the rate setting. The synchronisation rate refers to the rate at which the data is mirrored in the background. The best setting for the synchronisation rate depends on the speed of the network over which the DRBD systems communicate: 100Mbps Ethernet supports around 12MB/s, Gigabit Ethernet somewhere around 125MB/s.
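A widely quoted rule of thumb from the DRBD documentation is to give the background syncer roughly a third of the available replication bandwidth, so that resynchronisation does not starve live replication traffic. On a dedicated Gigabit Ethernet link that would look something like:
common { syncer { rate 33M; } }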
In the configuration above we have a resource defined as r0; the nodes are configured in the "on" host subsections.
"Device" configures the path of the logical block device that will be created by DRBD
"Disk" configures the block device that will be used to store the data.
"Address" configures the IP address and port number of the host that will hold this DRBD device.
"Meta-disk" configures the location where the metadata about the DRBD device will be stored.
You can set this to internal and DRBD will use the physical block device to store the information, by recording the metadata within the last sections of the disk.
Once you have created your configuration file, you must carry out the following steps on both nodes.
Create device metadata.
$ drbdadm create-md r0
v08 Magic number not found
Writing meta data...
initialising activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
Attach the backing device.
$ drbdadm attach r0
Set the synchronisation parameters.
$ drbdadm syncer r0
Connect it to the peer.
$ drbdadm connect r0
Run the service.
$ service drbd start
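At this point /proc/drbd should show the resource connected, but both nodes will typically still be Secondary and Inconsistent. To choose the node whose data becomes authoritative and start the initial synchronisation, run the following on that node only (assuming DRBD 8.x command syntax):
$ drbdadm -- --overwrite-data-of-peer primary r0
You can watch the synchronisation progress with:
$ cat /proc/drbd
Once the device is Primary you can create the filesystem that Heartbeat will later mount (ext3 on /dev/drbd0, to match the haresources line further down):
$ mkfs.ext3 /dev/drbd0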
Heartbeat:
Heartbeat provides the IP redundancy and the service HA functionality.
On the failure of the primary node, the VIP is assigned to the secondary node and the services configured to be HA are started on the secondary node.
Heartbeat configuration:
/etc/ha.d/ha.cf
## /etc/ha.d/ha.cf on node1
## This configuration is to be the same on both machines
## This example is made for version 2, comment out crm if using version 1
## replace the node names below with the names of your nodes.
crm no
keepalive 1
deadtime 5
warntime 3
initdead 20
bcast eth0
auto_failback yes
node bailey
node giskard
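Briefly, for the Heartbeat v1 style directives used here: keepalive is the interval in seconds between heartbeat packets; deadtime is how long to wait before declaring the peer dead; warntime is when to log a late-heartbeat warning; initdead is the extended deadtime allowed at startup so both nodes can finish booting; bcast eth0 sends the heartbeats as broadcasts on eth0; and auto_failback yes moves resources back to their preferred node once it rejoins the cluster.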
/etc/ha.d/authkeys
## The configuration below uses crc, which provides packet integrity checking only: there is no real authentication or encryption of node packets.
## Note: make sure the authkeys file has the correct permissions (chmod 600).
## /etc/ha.d/authkeys
auth 1
1 crc
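If the heartbeat link runs over a network you do not fully trust, Heartbeat also supports keyed hashes; a sha1 variant (the shared secret here is a placeholder) would look like:
auth 1
1 sha1 SomeSharedSecret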
/etc/ha.d/haresources
## 192.168.1.40 is the VIP (Virtual IP) assigned to the cluster.
## "smb" at the end of the configuration line is the service we wish to make HA.
## /dev/drbd0 is the DRBD device defined in drbd.conf; the Filesystem resource mounts it on /drbdData as ext3.
## /etc/ha.d/haresources
## This configuration is to be the same on both nodes
bailey 192.168.1.40 drbddisk Filesystem::/dev/drbd0::/drbdData::ext3 smb
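With the configuration files in place on both nodes, start Heartbeat and (assuming Red Hat style init scripts, as used elsewhere in this article) enable it at boot:
# service heartbeat start
# chkconfig heartbeat on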