[data]
oplocks = False
level2 oplocks = False
veto oplock files = /*.dbf/*.DBF/*.ndx/*.NDX/*.dbx/*.DBX/*.dbt/*.DBT/
You can further tune Samba by following this guide.
mount -t cifs //{server}/{share} {mount-point} -o username=name,password=pass,directio
The directio option disables inode data caching on files opened on this mount, which precludes mmapping files on the mount. On fast networks, where client-side caching offers little or no benefit (for example, when the application performs large sequential reads bigger than the page size without rereading the same data), this can give better performance than the default behavior, which caches reads (readahead) and writes (writebehind) through the local Linux client pagecache when an oplock (caching token) is granted and held. Note that directio also allows write operations larger than the page size to be sent to the server.
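If you want the share mounted automatically at boot, an equivalent /etc/fstab entry might look like the following sketch (the server, share, mount point and credentials are placeholders, not values from this article):
//server/share /mnt/share cifs username=name,password=pass,directio 0 0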
Apr 22 16:57:39 bailey kernel: Status code returned 0xc000006d NT_STATUS_LOGON_FAILURE
Apr 22 16:57:39 bailey kernel: CIFS VFS: Send error in SessSetup = -13
Apr 22 16:57:39 bailey kernel: CIFS VFS: cifs_mount failed w/return code = -13
If you see errors like these, you need to create the Samba user specified on the mount command:
smbpasswd -a username
FYI - Make sure you umount all the Samba {mount-point(s)} before shutting down Samba.
DRBD:
DRBD (Distributed Replicated Block Device) forms the storage redundancy portion of an HA cluster setup. Explained in basic terms, DRBD provides a means of achieving RAID 1 behavior over a network: whole block devices are mirrored across the network.
To start off you will need two identically sized raw drives or partitions. Many how-tos on the internet assume the use of whole drives, which of course gives better performance, but if you are simply getting familiar with the technology you can repartition existing drives to allow for two equally sized raw partitions, one on each of the systems you will be using.
There are 3 DRBD replication modes:
• Protocol A: Write I/O is reported as completed as soon as it has reached the local disk and the local TCP send buffer.
• Protocol B: Write I/O is reported as completed as soon as it has reached the local disk and the remote TCP buffer cache.
• Protocol C: Write I/O is reported as completed as soon as it has reached both the local and the remote disks.
If we were installing the HA cluster on a slow LAN, or if the geographical separation of the systems involved was great, then I recommend you opt for asynchronous mirroring (Protocol A), where notification of a completed write operation occurs as soon as the local disk write is performed. This will greatly improve performance.
As we are setting up our HA cluster connected via a fast LAN, we will be using DRBD in fully synchronous mode, Protocol C.
With Protocol C, the file system on the active node is only notified that the write operation has finished once the block has been written to both disks of the cluster. Protocol C is the most commonly used mode of DRBD.
/etc/drbd.conf
global { usage-count yes; }
common { syncer { rate 10M; } }
resource r0 {
protocol C;
net {
max-buffers 2048;
ko-count 4;
}
on bailey {
device /dev/drbd0;
disk /dev/sda4;
address 192.168.1.125:7789;
meta-disk internal;
}
on giskard {
device /dev/drbd0;
disk /dev/sda3;
address 192.168.1.127:7789;
meta-disk internal;
}
}
drbd.conf explained:
Global section, usage-count. The DRBD project keeps statistics about the usage of DRBD versions. They do this by contacting a HTTP server each time a new DRBD version is installed on a system. This can be disabled by setting usage-count no;.
The common section contains configuration settings inherited by all defined resources.
The synchronisation rate is set in the syncer section by assigning a value to the rate setting. The synchronisation rate refers to the rate at which data is mirrored in the background. The best setting for the synchronisation rate depends on the speed of the network over which the DRBD systems communicate: 100Mbps Ethernet supports around 12MBps, Gigabit Ethernet somewhere around 125MBps.
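As a rough sketch (the figure below is a common rule of thumb, not a value taken from this article), a cluster connected over Gigabit Ethernet might reserve about a third of the link for background resynchronisation:
common { syncer { rate 33M; } }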
In the configuration above we have a resource defined as r0; the nodes are configured in the "on" host subsections.
"Device" configures the path of the logical block device that will be created by DRBD
"Disk" configures the block device that will be used to store the data.
"Address" configures the IP address and port number of the host that will hold this DRBD device.
"Meta-disk" configures the location where the metadata about the DRBD device will be stored.
You can set this to internal and DRBD will use the physical block device to store the information, by recording the metadata within the last sections of the disk.
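If you would rather keep the metadata off the data device, meta-disk can instead point at a dedicated partition. For example (the device name below is hypothetical):
meta-disk /dev/sdb1[0];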
Once you have created your configuration file, you must carry out the following steps on both nodes.
Create device metadata.
$ drbdadm create-md r0
v08 Magic number not found
Writing meta data...
initialising activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
Attach the backing device.
$ drbdadm attach r0
Set the synchronisation parameters.
$ drbdadm syncer r0
Connect it to the peer.
$ drbdadm connect r0
Run the service.
$ service drbd start
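With the service running on both nodes, one node must be promoted to primary and a file system created on the DRBD device; heartbeat will later mount it at /drbdData. The commands below are a hedged sketch of these usual next steps rather than part of the original walkthrough.
Promote the chosen node (bailey in this example) and create the file system, on that node only.
$ drbdadm -- --overwrite-data-of-peer primary r0
$ mkfs.ext3 /dev/drbd0
Create the mount point on both nodes.
$ mkdir /drbdData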
Heartbeat:
Heartbeat provides the IP redundancy and the service HA functionality.
On failure of the primary node, the VIP is assigned to the secondary node and the services configured to be HA are started on the secondary node.
Heartbeat configuration:
/etc/ha.d/ha.cf
## /etc/ha.d/ha.cf on node1
## This configuration is to be the same on both machines
## This example is made for version 2, comment out crm if using version 1
## Replace the node variables with the names of your nodes.
crm no
keepalive 1
deadtime 5
warntime 3
initdead 20
bcast eth0
auto_failback yes
node bailey
node giskard
/etc/ha.d/authkeys
## The configuration below effectively turns authentication and encryption off for the node packets (crc provides only a checksum).
## Note: make sure the authkeys file has the correct permissions (chmod 600).
## /etc/ha.d/authkeys
auth 1
1 crc
/etc/ha.d/haresources
## 192.168.1.40 is the VIP (Virtual IP) assigned to the cluster.
## "smb" in the configuration line represents the service we wish to make HA.
## /dev/drbd0 is the DRBD device you configured in drbd.conf.
## /etc/ha.d/haresources
## This configuration is to be the same on both nodes
bailey 192.168.1.40 drbddisk Filesystem::/dev/drbd0::/drbdData::ext3 smb
Opening SSH to the outside world is a security risk. Here is how to restrict SSH access to certain IP addresses on a machine.
- Edit the /etc/hosts.allow file to include these lines, assuming your machine is on the 192.168.2.x non-routing IP block and you want to allow access from the external address 217.40.111.121. Remember to add the period on the end of each incomplete IP number. If you have another complete IP address or range, add a space and that address or range on the end.
sshd,sshdfwd-X11: 192.168.2. 217.40.111.121
- Edit your /etc/hosts.deny file to include this line:
sshd,sshdfwd-X11:ALL
- These lines refuse SSH connections from anyone not in the IP address blocks listed.
Additionally you can restrict SSH access by username.
- Edit the /etc/ssh/sshd_config file and add the following lines
PermitRootLogin no
AllowUsers user1 user2 user3 etc
PasswordAuthentication yes
Now restart the ssh daemon for these changes to take effect
service sshd restart
The 64-bit port of Recital requires these libraries to allow access to Xbase and C-ISAM data files, which are 32-bit.
If you do not have these libraries installed, you will get either a "can't find db.exe" error or an "error loading shared libraries" error when trying to run or license Recital.
Installing the ia32 shared libraries
Redhat EL 5 / Centos 5 / Fedora 10
- Insert the Red Hat Enterprise Linux 5 Supplementary CD, which contains the ia32el package.
- After the system has mounted the CD, change to the directory containing the Supplementary packages. For example:
cd /media/cdrom/Supplementary/
- Install the ia32el package:
rpm -Uvh ia32el-<version>.ia64.rpm
- Alternatively, install the package with yum:
yum install ia32el
Ubuntu / Debian
sudo apt-get install ia32-libs
In this article Barry Mavin, CEO and Chief Software Architect for Recital provides details on how to use the Recital Universal .NET Data Provider with the Recital Database Server.
Overview
A data provider in the .NET Framework serves as a bridge between an application and a data source. A data provider is used to retrieve data from a data source and to reconcile changes to that data back to the data source.
Each .NET Framework data provider has a DataAdapter object: the .NET Framework Data Provider for OLE DB is the OleDbDataAdapter object, the .NET Framework Data Provider for SQL Server is the SqlDataAdapter object, the .NET Framework Data Provider for ODBC is the OdbcDataAdapter object, and the .NET Framework Data Provider for the Recital Database Server is the RecitalDataAdapter object.
The Recital Universal .NET Data Provider can access any data sources supported by the Recital Database Server. It is not restricted to only access Recital data. It can be used to access server-side ODBC, JDBC and OLE DB data sources also.
Core classes of the Data Provider
The Connection, Command, DataReader, and DataAdapter objects represent the core elements of the .NET Framework data provider model. The Recital Universal .NET Data Provider is plug-compatible with the .NET Framework Data Provider for SQL Server. All SQL Server classes are prefixed with "Sql", e.g. SqlDataAdapter. To use the Recital Universal Data Adapter, simply change the "Sql" prefix to "Recital", e.g. RecitalDataAdapter.
The following table describes these objects.
| Object | Description |
|---|---|
| RecitalConnection | Establishes a connection to a specific data source. |
| RecitalCommand | Executes a command against a data source. |
| RecitalDataReader | Reads a forward-only, read-only stream of data from a data source. |
| RecitalDataAdapter | Populates a DataSet and resolves updates with the data source. |
Along with the core classes listed in the preceding table, a .NET Framework data provider also contains the classes listed in the following table.
| Object | Description |
|---|---|
| RecitalTransaction | Enables you to enlist commands in transactions at the data source. |
| RecitalCommandBuilder | A helper object that will automatically generate command properties of a DataAdapter or will derive parameter information from a stored procedure and populate the Parameters collection of a Command object. |
| RecitalParameter | Defines input, output, and return value parameters for commands and stored procedures. |
The Recital Universal .NET Data Provider provides connectivity to the Recital Database Server running on any supported platform (Windows, Linux, Unix, OpenVMS) using the RecitalConnection object. The Recital Universal .NET Data Provider supports a connection string format that is similar to the SQL Server connection string format.
The basic format of a connection string consists of a series of keyword/value pairs separated by semicolons. The equal sign (=) connects each keyword and its value.
The following table lists the valid names for keyword values within the ConnectionString property of the RecitalConnection class.
| Name | Default | Description |
|---|---|---|
| Data Source -or- Server -or- Servername -or- Nodename | | The name or network address of the instance of the Recital Database Server to which to connect. |
| Directory | | The target directory on the remote server where the data to be accessed resides. This is ignored when a Database is specified. |
| Encrypt -or- Encryption | false | When true, DES3 encryption is used for all data sent between the client and server. |
| Initial Catalog -or- Database | | The name of the database on the remote server. |
| Password -or- Pwd | | The password used to authenticate access to the remote server. |
| User ID -or- uid -or- User -or- Username | | The user name used to authenticate access to the remote server. |
| Connection Pooling -or- Pool | false | Enable connection pooling to the server. This provides for one connection to be shared. |
| Logging | false | Provides the ability to log all server requests for debugging purposes. |
| Rowid | true | When Rowid is true (the default), a column that is a unique row identifier is post-fixed to each SELECT query. This is used to provide optimised UPDATE and DELETE operations. If you use the RecitalSqlGrid, RecitalSqlForm, or RecitalSqlGridForm components then this column is not visible, but is used to handle updates to the underlying data source. |
| Logfile | | The name of the logfile for logging. |
| Gateway | | Opens an SQL gateway (connection) to a foreign SQL data source on the remote server. The gateway can be specified in several formats. |
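As an illustration of the connection string format (a minimal sketch; the server name, database and credentials below are placeholders, not values taken from this article), a RecitalConnection can be opened like this:
using Recital.Data;

public void OpenConnectionExample()
{
    // every keyword value below is a placeholder example
    string connStr = "Data Source=myserver;Database=southwind;" +
                     "User ID=myuser;Password=mypass;Encrypt=true";
    RecitalConnection conn = new RecitalConnection(connStr);
    conn.Open();
    // ... issue RecitalCommand queries or fill a DataSet here ...
    conn.Close();
}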
Populating a DataSet from a DataAdapter
The ADO.NET DataSet is a memory-resident representation of data that provides a consistent relational programming model independent of the data source. The DataSet represents a complete set of data including tables, constraints, and relationships among the tables. Because the DataSet is independent of the data source, a DataSet can include data local to the application, as well as data from multiple data sources. Interaction with existing data sources is controlled through the DataAdapter.
A DataAdapter is used to retrieve data from a data source and populate tables within a DataSet. The DataAdapter also resolves changes made to the DataSet back to the data source. The DataAdapter uses the Connection object of the .NET Framework data provider to connect to a data source and Command objects to retrieve data from and resolve changes to the data source.
The SelectCommand property of the DataAdapter is a Command object that retrieves data from the data source. The InsertCommand, UpdateCommand, and DeleteCommand properties of the DataAdapter are Command objects that manage updates to the data in the data source according to modifications made to the data in the DataSet.
The Fill method of the DataAdapter is used to populate a DataSet with the results of the SelectCommand of the DataAdapter. Fill takes as its arguments a DataSet to be populated, and a DataTable object, or the name of the DataTable to be filled with the rows returned from the SelectCommand.
The Fill method uses the DataReader object implicitly to return the column names and types used to create the tables in the DataSet, as well as the data to populate the rows of the tables in the DataSet. Tables and columns are only created if they do not already exist; otherwise Fill uses the existing DataSet schema.
Examples in C#:
////////////////////////////////////////////////////////////////////////
// include the references below
using System.Data;
using Recital.Data;
////////////////////////////////////////////////////////////////////////
// The following code example creates an instance of a DataAdapter that
// uses a Connection to the Recital Database Server Southwind database
// and populates a DataTable in a DataSet with the list of customers.
// The SQL statement and Connection arguments passed to the DataAdapter
// constructor are used to create the SelectCommand property of the DataAdapter.
public DataSet SelectCustomers()
{
RecitalConnection swindConn = new
RecitalConnection("Data Source=localhost;Initial Catalog=southwind");
RecitalCommand selectCMD = new
RecitalCommand("SELECT CustomerID, CompanyName FROM Customers", swindConn);
selectCMD.CommandTimeout = 30;
RecitalDataAdapter custDA = new RecitalDataAdapter();
custDA.SelectCommand = selectCMD;
swindConn.Open();
DataSet custDS = new DataSet();
custDA.Fill(custDS, "Customers");
swindConn.Close();
return custDS;
}
////////////////////////////////////////////////////////////////////////
// The following example uses the RecitalCommand, RecitalDataAdapter and
// RecitalConnection, to select records from a database, and populate a
// DataSet with the selected rows. The filled DataSet is then returned.
// To accomplish this, the method is passed an initialized DataSet, a
// connection string, and a query string that is a SQL SELECT statement
public DataSet SelectRecitalRows(DataSet dataset, string connection, string query)
{
RecitalConnection conn = new RecitalConnection(connection);
RecitalDataAdapter adapter = new RecitalDataAdapter();
adapter.SelectCommand = new RecitalCommand(query, conn);
adapter.Fill(dataset);
return dataset;
}
The REQUIRE() statement includes and executes the contents of the specified file at the current program execution level.
When a file is included, the code it contains inherits the variable scope of the line on which the include occurs. Any variables, procedures, functions or classes declared in the included file will be available at the current program execution level.
The REQUIRE_ONCE() statement is identical to the REQUIRE() statement except that Recital will check to see if the file has already been included and, if so, ignore the command.
The full syntax is:
REQUIRE( expC )
REQUIRE_ONCE( expC )
e.g. REQUIRE_ONCE( "myapp/myglobals.prg" )
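A minimal sketch of how this behaves (the file name and variable below are hypothetical):
* myapp/myglobals.prg - the file to be included
PUBLIC gcAppName
gcAppName = "My Application"

* main.prg
REQUIRE_ONCE( "myapp/myglobals.prg" )
? gcAppName    && the variable declared in the included file is now in scope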
VMware products, such as ESX, Workstation, Server, and Fusion, come with a built-in VNC server to access guests.
This allows you to connect to the guest without having a VNC server installed in the guest - useful if a server doesn't exist for the guest, or if you need access at a time when a server would not work (say, during the boot process). It's also good in conjunction with Headless Mode.
The VNC server is set up on a per-VM basis, and is disabled by default. To enable it, add the following lines to the .vmx:
RemoteDisplay.vnc.enabled = "TRUE"
RemoteDisplay.vnc.port = "5901"
You can set a password with RemoteDisplay.vnc.key; details for how to calculate the obfuscated value given a plaintext password are in Compute hashed password for use with RemoteDisplay.vnc.key.
If you want more than one VM set up in this manner, make sure they have unique port numbers. To connect, use a VNC client pointing at host-ip-address:port. If you connect from a different computer, you may have to open a hole in the OS X firewall. If you use Leopard's Screen Sharing.app on the same computer as Fusion, don't use port 5900 since Screen Sharing refuses to connect to that.
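To connect with the settings above, point any VNC client at the host on port 5901; for example (vncviewer is used here purely as an illustration):
vncviewer host-ip-address:5901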
// determine how many Recital users are on the system
// (grep -v grep excludes the grep process itself from the count)
nusers = pipetostr("ps -ef | grep db.exe | grep -v grep | wc -l")