Key features of the Recital database include:
- SQL-92 and a broad subset of ANSI SQL 99, as well as extensions
- Cross-platform support
- Stored procedures
- Triggers
- Cursors
- Updatable Views
- System Tables
- Query caching
- High-performance
- Single-User and Multi-User
- Multi-Process
- ACID Transactions
- Referential Integrity
- Cascading Updates and Deletes
- Multi-table Joins
- Row-level Locking
- BLOBs (Binary Large Objects)
- UDFs (User Defined Functions)
- OLTP (On-Line Transaction Processing)
- Drivers for ODBC, JDBC, and .NET
- Sub-SELECTs (i.e. nested SELECTs)
- Embedded database library
- Database timelines providing data undo functionality
- Fault tolerant clustering support
- Hot backup
The REQUIRE() statement includes and executes the contents of the specified file at the current program execution level.
When a file is included, the code it contains inherits the variable scope of the line on which the include occurs. Any variables, procedures, functions or classes declared in the included file will be available at the current program execution level.
The REQUIRE_ONCE() statement is identical to the REQUIRE() statement except that Recital will check to see if the file has already been included and, if so, ignore the command.
The full syntax is:
REQUIRE( expC )
REQUIRE_ONCE( expC )
e.g. REQUIRE_ONCE( "myapp/myglobals.prg" )
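As a minimal sketch (the variable and procedure names are hypothetical; the file name follows the example above), a shared globals file can be included once and its declarations used at the current execution level:
* myapp/myglobals.prg -- declarations shared across the application
public gcAppName
gcAppName = "My Application"

procedure sayhello
    ? "Hello from " + gcAppName
endproc

* main.prg -- include the globals once, then use the declarations
REQUIRE_ONCE( "myapp/myglobals.prg" )
sayhello()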
RTOS()
Syntax
RTOS( [ <workarea> ] )
Description
The RTOS() function returns all the fields in the current row as a string. The string begins with the unique row identifier and the deleted flag, followed by the data in the record. An optional workarea can be specified; otherwise the current workarea is used.
Example
use backup in 0
use accounts in 0
nrecs = reccount("accounts")
for i = 1 to nrecs
    && position both workareas on the same record before comparing
    goto i in accounts
    goto i in backup
    if rtos(accounts) != rtos(backup)
        debug("record " + alltrim(str(i)) + " does not match")
    endif
next
In this article Barry Mavin, CEO and Chief Software Architect for Recital, details Working with Stored Procedures in the Recital Database Server.
Overview
Stored procedures and user-defined functions are collections of SQL statements and optional control-of-flow statements written in the Recital 4GL (compatible with VFP), stored under a name and saved in a database. Both stored procedures and user-defined functions are just-in-time compiled by the Recital database engine. Using the Database Administrator in Recital Enterprise Studio, you can easily create, view, modify, and test Stored Procedures, Triggers, and user-defined functions.
Creating and Editing Stored Procedures
To create a new Stored Procedure, right-click the Procedures node in the Databases tree of the Project Explorer and choose Create. To modify an existing Stored Procedure, select it in the Databases tree of the Project Explorer by double-clicking on it, or select Modify from its context menu. By convention, we recommend that you name your Stored Procedures beginning with "sp_xxx_", user-defined functions with "f_xxx_", and Triggers with "dt_xxx_", where xxx is the name of the table that they are associated with.
Testing the Procedure
To test-run the Stored Procedure, double-click it in the Databases tree of the Project Explorer. Once the Database Administrator is displayed, click the Run button to run the procedure.
Getting return values
Example Stored Procedure called "sp_myproc":
parameter arg1, arg2
return arg1 + arg2
Example calling the Stored Procedure from C# .NET:
////////////////////////////////////////////////////////////////////////
// include the references below
using System.Data;
using Recital.Data;
////////////////////////////////////////////////////////////////////////
// sample code to call a Stored Procedure that adds two numeric values together
public int CallStoredProcedure()
{
    RecitalConnection conn = new RecitalConnection("Data Source=localhost;Database=southwind;uid=?;pwd=?");
    RecitalCommand cmd = new RecitalCommand();
    cmd.Connection = conn;
    cmd.CommandText = "sp_myproc(@arg1, @arg2)";
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters["@arg1"].Value = 10;
    cmd.Parameters["@arg2"].Value = 20;
    conn.Open();
    cmd.ExecuteNonQuery();
    int result = (int)(cmd.Parameters["retvalue"].Value);   // get the return value from the sp
    conn.Close();
    return result;
}
Writing Stored Procedures that return a Resultset
If you want to write a Stored Procedure that returns a ResultSet, you use the SETRESULTSET() function of the 4GL. Using the Universal .NET Data Provider, you can then execute the 4GL Stored Procedure and return the ResultSet to the client application for processing. ResultSets that are returned from Stored Procedures are read-only.
Example Stored Procedure called "sp_myproc":
parameter query
select * from customers &query into cursor "mydata"
return setresultset("mydata")
Example calling the Stored Procedure from C# .NET:
////////////////////////////////////////////////////////////////////////
// include the references below
using System.Data;
using Recital.Data;
////////////////////////////////////////////////////////////////////////
// sample code to call a stored procedure that returns a ResultSet
public void CallStoredProcedure()
{
    RecitalConnection conn = new RecitalConnection("Data Source=localhost;Database=southwind;uid=?;pwd=?");
    RecitalCommand cmd = new RecitalCommand();
    cmd.Connection = conn;
    cmd.CommandText = "sp_myproc(@query)";
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters["@query"].Value = "where not deleted()";
    conn.Open();
    RecitalDataReader dreader = cmd.ExecuteReader();
    int sqlcnt = (int)(cmd.Parameters["sqlcnt"].Value);   // returns the number of affected rows
    while (dreader.Read())
    {
        // read and process the data
    }
    dreader.Close();
    conn.Close();
}
DRBD:
DRBD (Distributed Replicated Block Device) forms the storage redundancy portion of an HA cluster setup. Explained in basic terms, DRBD provides a means of achieving RAID 1 behavior over a network, where whole block devices are mirrored across the network.
To start off, you will need two identically sized raw drives or partitions. Many how-tos on the internet assume the use of whole drives, which of course gives better performance, but if you are simply getting familiar with the technology you can repartition existing drives to provide two equally sized raw partitions, one on each of the systems you will be using.
There are 3 DRBD replication modes:
• Protocol A: Write I/O is reported as completed as soon as it has reached the local disk and the local TCP send buffer.
• Protocol B: Write I/O is reported as completed as soon as it has reached the local disk and the remote buffer cache.
• Protocol C: Write I/O is reported as completed as soon as it has reached both the local and remote disks.
If we were installing the HA cluster on a slow LAN, or if the geographical separation of the systems involved was great, then I would recommend opting for asynchronous mirroring (Protocol A), where notification of a completed write operation occurs as soon as the local disk write is performed. This will greatly improve performance.
As we are setting up our HA cluster connected via a fast LAN, we will be using DRBD in fully synchronous mode, Protocol C.
With Protocol C, the file system on the active node is only notified that the write operation has finished when the block has been written to both disks of the cluster. Protocol C is the most commonly used mode of DRBD.
/etc/drbd.conf
global { usage-count yes; }
common { syncer { rate 10M; } }
resource r0 {
    protocol C;
    net {
        max-buffers 2048;
        ko-count 4;
    }
    on bailey {
        device    /dev/drbd0;
        disk      /dev/sda4;
        address   192.168.1.125:7789;
        meta-disk internal;
    }
    on giskard {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.1.127:7789;
        meta-disk internal;
    }
}
drbd.conf explained:
Global section, usage-count: the DRBD project keeps statistics about the usage of DRBD versions. It does this by contacting an HTTP server each time a new DRBD version is installed on a system. This can be disabled by setting usage-count no;.
The common section contains settings inherited by all defined resources.
The synchronisation rate is set by assigning a value to the rate setting in the syncer section. The synchronisation rate refers to the rate at which data is mirrored in the background. The best setting depends on the speed of the network over which the DRBD systems communicate: 100Mbps Ethernet supports around 12MB/s, Gigabit Ethernet somewhere around 125MB/s (a tuning example is shown after this section).
In the configuration above we have a resource defined as r0; the nodes are configured in the "on" host subsections.
"Device" configures the path of the logical block device that will be created by DRBD.
"Disk" configures the block device that will be used to store the data.
"Address" configures the IP address and port number of the host that will hold this DRBD device.
"Meta-disk" configures the location where the metadata about the DRBD device will be stored.
You can set this to internal, in which case DRBD will store the metadata within the last section of the physical block device.
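As a hedged illustration of the two settings just discussed (the 33M figure and the /dev/sdb1 partition are assumptions for the example, not values from the cluster described here), the syncer rate could be raised for a dedicated Gigabit replication link and the metadata moved to an external partition:
common { syncer { rate 33M; } }      # roughly 30% of the ~125MB/s Gigabit bandwidth, leaving headroom for application I/O

on bailey {
    device    /dev/drbd0;
    disk      /dev/sda4;
    address   192.168.1.125:7789;
    meta-disk /dev/sdb1[0];          # external metadata on a dedicated partition instead of internal
}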
Once you have created your configuration file, you must conduct the following steps on both nodes.
Create device metadata.
$ drbdadm create-md r0
v08 Magic number not found
Writing meta data...
initialising activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
Attach the backing device.
$ drbdadm attach r0
Set the syncronisation parameters.
$ drbdadm syncer r0
Connect it to the peer.
$ drbdadm connect r0
Run the service.
$ service drbd start
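At this point the nodes are connected, but the device holds no data and neither node is primary. The usual next step, sketched below with hedging (exact options can differ between DRBD versions), is to promote one node to primary to force the initial synchronisation, then create the filesystem that Heartbeat will later mount at /drbdData:
$ drbdadm -- --overwrite-data-of-peer primary r0   # on the node chosen as initial primary (bailey)
$ cat /proc/drbd                                   # watch the initial synchronisation progress
$ mkfs.ext3 /dev/drbd0                             # create the ext3 filesystem referenced in haresources below
$ mkdir /drbdData                                  # create the mount point (on both nodes)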
Heartbeat:
Heartbeat provides the IP redundancy and the service HA functionality.
On failure of the primary node, the VIP is assigned to the secondary node and the services configured to be HA are started there.
Heartbeat configuration:
/etc/ha.d/ha.cf
## /etc/ha.d/ha.cf on node1
## This configuration is to be the same on both machines
## This example is made for version 2, comment out crm if using version 1
## replace the node names below with the names of your nodes.
crm no
keepalive 1
deadtime 5
warntime 3
initdead 20
bcast eth0
auto_failback yes
node bailey
node giskard
/etc/ha.d/authkeys
## The configuration below uses plain CRC checksums, i.e. no real authentication and no encryption, for the authentication of nodes and their packets.
## Note: make sure the authkeys file has the correct permissions (chmod 600).
## /etc/ha.d/authkeys
auth 1
1 crc
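If the heartbeat traffic crosses a network you do not fully trust, the authkeys file can use a keyed hash instead of plain CRC; this is a hedged variant and the shared secret shown is only a placeholder:
## /etc/ha.d/authkeys -- sha1 variant
auth 1
1 sha1 ReplaceWithYourSharedSecret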
/etc/ha.d/haresources
## 192.168.1.40 is the VIP (Virtual IP) assigned to the cluster.
## The "smb" at the end of the configuration line is the service we wish to make HA.
## /dev/drbd0 is the DRBD device you configured in drbd.conf; the Filesystem resource mounts it at /drbdData as ext3.
## /etc/ha.d/haresources
## This configuration is to be the same on both nodes
bailey 192.168.1.40 drbddisk Filesystem::/dev/drbd0::/drbdData::ext3 smb
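With the configuration files in place on both nodes, Heartbeat can be started. A simple way to check the setup (a suggested test, not part of the original configuration) is to confirm that the active node holds the VIP and then stop Heartbeat on it to watch the address and the smb service fail over:
$ service heartbeat start   # run on both nodes
$ ip addr show eth0         # the active node (bailey) should now hold the VIP 192.168.1.40
$ service heartbeat stop    # stopping Heartbeat on bailey should fail the VIP and smb over to giskard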
Recital is a dynamic programming language with an integrated high performance database particularly well suited for the development and deployment of high transaction throughput applications. Recital 10 further enhances Recital with extensive features and functionality to facilitate its use in fault tolerant high availability systems. Much of the development of Recital 10 was concentrated on performance optimizations and cluster aware functionality to provide an application platform that can be scaled as needed without any application changes.
Key features of Recital 10 include:
- Cluster-aware database engine that works transparently with DRBD, Heartbeat, GlusterFS and Samba
- High degree of fault tolerance with self-healing indexes
- Massive performance improvements
- Extensive internals overhaul and modernization with superior object-oriented capabilities
- Chronological data versioning with database timelines
- SmartQuery caching
- Database Administration Tools
- Code and Data Profiling
- Better integration with unix/linux command shell
- Incorporates a range of new built-in functions compatible with those in the PHP core libraries
- Built-in support for outputting data in HTML, XML, and JSON format
- Seamless SQL command integration into the Recital scripting language
- Much improved Microsoft FoxPro language compatibility
- Numerous extensions and improvements (see below for details)
- Very large file support (2^63)