The getUIComponentBitmapData method creates a BitmapData snapshot of a given IUIComponent. Pass in any UIComponent to get its BitmapData.

public static function getUIComponentBitmapData(target:IUIComponent):BitmapData {
    // Create a BitmapData matching the component's dimensions and draw the component into it
    var resultBitmapData:BitmapData = new BitmapData(target.width, target.height);
    var m:Matrix = new Matrix();
    resultBitmapData.draw(target, m);
    return resultBitmapData;
}

Next, convert the BitmapData to a JPEG ByteArray. This uses the JPGEncoder class from Adobe's as3corelib library.

private static function encodeToJPEG(data:BitmapData, quality:Number = 75):ByteArray {
    // Encode the bitmap as a JPEG at the given quality (0-100)
    var encoder:JPGEncoder = new JPGEncoder(quality);
    return encoder.encode(data);
}

Now encode the JPEG ByteArray as a Base64 string using the mx.utils.Base64Encoder class.

public static function base64Encode(data:ByteArray):String {     
    var encoder:Base64Encoder = new Base64Encoder();     
    encoder.encodeBytes(data);     
    return encoder.flush(); 
}

Finally, upload the Base64-encoded string to the server.

public static function uploadData(jpgEncodedFile:String):void {
    var url:String = "saveFile.php";
    var urlRequest:URLRequest = new URLRequest(url);
    urlRequest.method = URLRequestMethod.POST;
    var urlVariables:URLVariables = new URLVariables();
    urlVariables.file = jpgEncodedFile;    // the Base64 string returned from base64Encode()
    urlRequest.data = urlVariables;        // POST variables belong on the request, not the loader
    var urlLoader:URLLoader = new URLLoader();
    urlLoader.load(urlRequest);
}
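
Putting it all together, a minimal sketch of the full capture-and-upload chain (myPanel stands in for whichever UIComponent you want to capture):

var bmd:BitmapData = getUIComponentBitmapData(myPanel);   // snapshot the component
var jpgBytes:ByteArray = encodeToJPEG(bmd);               // compress to JPEG
var encoded:String = base64Encode(jpgBytes);              // Base64 for safe transport
uploadData(encoded);                                      // POST to saveFile.php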

On the server, saveFile.php decodes the posted data and writes the file out.
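
The original listing for saveFile.php is missing here; a minimal sketch of what it presumably does (the screenshot.jpg filename is purely illustrative) would be:

<?php
// Hypothetical sketch: decode the Base64 "file" POST variable and write it out as a JPEG
$data = base64_decode($_POST["file"]);
file_put_contents("screenshot.jpg", $data);
?>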



If you have 4GB or more of RAM, use a Linux kernel compiled for PAE-capable machines; otherwise the full 4GB may not show up. All you have to do is install the PAE kernel package.

This package includes a version of the Linux kernel with support for up to 64GB of high memory. It requires a CPU with Physical Address Extensions (PAE).

The non-PAE kernel can only address up to 4GB of memory. Install the kernel-PAE package if your machine has 4GB of memory or more.

# yum install kernel-PAE

If you want to know how much memory CentOS is seeing, type this in a terminal:

# cat /proc/meminfo
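
After installing the PAE kernel and rebooting into it, you can confirm it is the running kernel; a quick check (the version string shown is only an example and will differ on your system, but it should end in PAE):

# uname -r
2.6.18-164.el5PAE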

Occasionally Subversion can get itself confused about what is and what is not in a working copy. This usually occurs if you have replaced the contents of a directory, such as when you upgrade a component in Joomla!

You receive a message containing this:

"working copy admin area is missing"

How to resolve this: 

Step 1 -- From a shell prompt, rename the directory that is causing the error, prefixing it with __

mv com_docman __com_docman

Step 2 -- Using your Subversion client, refresh your working copy, then "update" the directory that is causing the problem, e.g. update com_docman.

Step 3 -- Now you can commit the __com_docman directory.

After you have done this, follow these steps using your Subversion client:

Step 4 -- delete the com_docman directory from your working copy
Step 5 -- rename __com_docman back to com_docman

Now "commit all" and both your working copy and repository will be in sync.

Found a nice Subversion plugin for the Finder on the Mac.

The goal of the SCPlugin project is to integrate Subversion into the Mac OS X Finder. 

  • Support for Subversion.
  • Access to commonly used source control operations via the contextual menu.
  • Dynamic icon badging for files under version control, showing the status of your files visually.

When a node is clicked, set editable to false; set it back to true in the double-click event handler:
// the click event handler
private function onclick_sourcetree(e:Event):void {
    yourTree.editable = false;
}

// the double-click event handler
private function ondoubleclick_sourcetree(e:Event):void {
    yourTree.editable = true;
    yourTree.editedItemPosition = {columnIndex:0, rowIndex:yourTree.selectedIndex};   // begin editing the selected row
}
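
For the double-click handler to fire, the tree also needs double-click events enabled; a minimal wiring sketch (yourTree being the Tree id assumed by the handlers above):

yourTree.doubleClickEnabled = true;
yourTree.addEventListener(MouseEvent.CLICK, onclick_sourcetree);
yourTree.addEventListener(MouseEvent.DOUBLE_CLICK, ondoubleclick_sourcetree);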

Subclipse is an Eclipse Team Provider plug-in that adds Subversion support to the Eclipse IDE. This plugin is required in order to use the Recital Eclipse workspace.
Each Recital table can have one or more data dictionaries to provide a central repository for constraints and other metadata. 

Here's how to set up field validation for a field with a small static number of acceptable values.

Using the example.dbf table from the southwind sample database, validation can be added to the title field to ensure it matches one of a list of values:
open database southwind
alter table example add constraint;
(title set check inlist(alltrim(title),"Miss","Mr","Mrs","Ms"))
The inlist() function checks whether the specified expression exists in the comma-separated list which follows. An attempt to update title with a value not in the list will give the error: Validation on field 'TITLE' failed.
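
With the constraint in place, an attempt to store a value not in the list is rejected. A quick sketch of what that looks like in a Recital session (assuming the southwind sample data):

use example
replace title with "Dr"    && fails with: Validation on field 'TITLE' failed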

If you have access to the Recital Workbench, you can use the modify structure worksurface to add and alter your dictionary entries, including a customized error message if required.


DRBD:
DRBD (Distributed Replicated Block Device) forms the storage redundancy portion of an HA cluster setup. Explained in basic terms, DRBD provides a means of achieving RAID 1 behavior over a network, where whole block devices are mirrored across the network.

To start off you will need two identically sized raw drives or partitions. Many how-tos on the internet assume the use of whole drives, which of course gives better performance, but if you are simply getting familiar with the technology you can repartition existing drives to allow for two equally sized raw partitions, one on each of the systems you will be using.

There are 3 DRBD replication modes:
• Protocol A: write I/O is reported as completed as soon as it has reached the local disk and the local TCP send buffer.
• Protocol B: write I/O is reported as completed as soon as it has reached the local disk and the remote TCP buffer cache.
• Protocol C: write I/O is reported as completed as soon as it has reached both the local and remote disks.

If we were installing the HA cluster on a slow LAN, or if the geographical separation of the systems involved was great, then I would recommend you opt for asynchronous mirroring (Protocol A), where notification of a completed write operation occurs as soon as the local disk write is performed. This will greatly improve performance.

As we are setting up our HA cluster connected via a fast LAN, we will be using DRBD in fully synchronous mode, Protocol C.
Protocol C involves the file system on the active node only being notified that the write operation was finished when the block is written to both disks of the cluster. Protocol C is the most commonly used mode of DRBD.

/etc/drbd.conf

global { usage-count yes; }
common { syncer { rate 10M; } }
resource r0 {
    protocol C;
    net {
        max-buffers 2048;
        ko-count 4;
    }
    on bailey {
        device    /dev/drbd0;
        disk      /dev/sda4;
        address   192.168.1.125:7789;
        meta-disk internal;
    }
    on giskard {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.1.127:7789;
        meta-disk internal;
    }
}

drbd.conf explained:

Global section, usage-count. The DRBD project keeps statistics about the usage of DRBD versions. It does this by contacting an HTTP server each time a new DRBD version is installed on a system. This can be disabled by setting usage-count no;.

The common section contains configuration inherited by all defined resources.
The synchronisation rate is set in the syncer section by assigning a value to the rate setting. The synchronisation rate refers to the rate at which the data is mirrored in the background. The best setting for the synchronisation rate depends on the speed of the network over which the DRBD systems communicate: 100Mbps Ethernet supports around 12MBps, Gigabit Ethernet somewhere around 125MBps.

In the configuration above, we have a resource defined as r0; the nodes are configured in the "on" host subsections.
"device" configures the path of the logical block device that will be created by DRBD.
"disk" configures the block device that will be used to store the data.
"address" configures the IP address and port number of the host that will hold this DRBD device.
"meta-disk" configures the location where the metadata about the DRBD device will be stored. You can set this to internal and DRBD will use the physical block device to store the information, by recording the metadata within the last sections of the disk.

Once you have created your configuration file, you must carry out the following steps on both nodes.

Create device metadata.

$ drbdadm create-md r0
v08 Magic number not found
Writing meta data...
initialising activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success

Attach the backing device.
$ drbdadm attach r0

Set the synchronisation parameters.
$ drbdadm syncer r0

Connect it to the peer.
$ drbdadm connect r0

Run the service.
$ service drbd start
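
At this point the nodes are connected but neither is primary yet. A sketch of the remaining one-time steps, run on the node chosen as primary (the device and mount point match the haresources line in the Heartbeat section below; the exact drbdadm syntax can vary between DRBD versions):

$ drbdadm -- --overwrite-data-of-peer primary r0   # promote this node and kick off the initial sync
$ mkfs.ext3 /dev/drbd0                             # create a filesystem on the DRBD device
$ mkdir /drbdData                                  # the mount point used by Heartbeat
$ mount /dev/drbd0 /drbdData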

Heartbeat:

Heartbeat provides the IP redundancy and the service HA functionality.
On the failure of the primary node the VIP is assigned to the secondary node and the services configured to be HA are started on the secondary node.

Heartbeat configuration:

/etc/ha.d/ha.cf

## /etc/ha.d/ha.cf on node1
## This configuration is to be the same on both machines
## This example is made for version 2, comment out crm if using version 1
## Replace the node names below with the names of your own nodes.

crm no
keepalive 1
deadtime 5
warntime 3
initdead 20
bcast eth0
auto_failback yes
node bailey
node giskard

/etc/ha.d/authkeys
## The configuration below uses simple CRC checksums: no real authentication and no encryption of node packets.
## Note: make sure the authkeys file has the correct permissions (chmod 600).

## /etc/ha.d/authkeys
auth 1
1 crc

/etc/ha.d/haresources
## 192.168.1.40 is the VIP (Virtual IP) assigned to the cluster.
## "smb" in the configuration line is the service we wish to make HA.
## /dev/drbd0 is the device for the DRBD resource configured in drbd.conf.

## /etc/ha.d/haresources
## This configuration is to be the same on both nodes

bailey 192.168.1.40 drbddisk Filesystem::/dev/drbd0::/drbdData::ext3 smb
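
With all three files in place on both nodes, start Heartbeat on each (a sketch for SysV-init systems such as CentOS 5):

$ service heartbeat start      # run on both nodes
$ chkconfig heartbeat on       # start automatically at boot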


In this article Barry Mavin explains step by step how to set up a Linux HA (High Availability) cluster for running Recital applications on Red Hat/CentOS 5.3, although the general configuration should work for other Linux versions with a few minor changes.


If you have software packages which you wish to share with others or simply between your own personal machines, a neat and easy solution is to create your own YUM repository and provide your .repo file for download.

YUM is by far the easiest method of installing software on Red Hat, CentOS and Fedora. Not only does it mean you don't need to trawl the web looking for somewhere to download the packages, YUM also does a great job of satisfying any package dependencies. As long as the required packages are available in the enabled repositories on your system, YUM will go out and get everything you need.

To create your own YUM repository, you will need to install the yum-utils and createrepo packages:

yum install yum-utils createrepo

yum-utils contains the tools you will need to manage your soon-to-be-created repository, and createrepo is used to create the XML-based RPM metadata your repository requires.

Once you have installed these required tools, create a directory in your chosen web server's document root e.g:

mkdir -p /var/www/html/repo/recital/updates

Copy the RPMs you wish to host into this newly created directory.

The next step is to create the XML-based RPM metadata, using the createrepo program we installed earlier.

At the shell type the following command:

createrepo -v -s md5 /var/www/html/repo/recital/updates


This will create the required metadata in the repodata directory of your /var/www/html/repo/recital/updates directory.

[root@test repodata]# ls -l
total 44
-rw-r--r-- 1 root root 28996 Jan 13 21:42 filelists.xml.gz
-rw-r--r-- 1 root root   284 Jan 13 21:42 other.xml.gz
-rw-r--r-- 1 root root  1082 Jan 13 21:42 primary.xml.gz
-rw-r--r-- 1 root root   951 Jan 13 21:42 repomd.xml

To do a final consistency check on your repository run the following command:

verifytree /var/www/html/repo/recital/updates

We now have a fully functioning YUM repository for our hosted RPM packages.
The next step is to create a .repo file in the client system's /etc/yum.repos.d directory.

Navigate to the /etc/yum.repos.d directory on your system as root.

Use your preferred text editor to create the .repo file. In this example I will call it recital.repo.
Now paste in the following lines:

[Recital]
name=Recital Update Server
baseurl=http://ftp.recitalsoftware.com/repo/recital/updates
enabled=1
gpgcheck=1

Note that with gpgcheck=1 YUM will verify package signatures, so the signing public key must be imported on the client (or referenced with a gpgkey= line); if your packages are unsigned, set gpgcheck=0 instead. Once the file is saved, run the following at the shell prompt on the same machine (the YUM client system):

$ yum repolist
Loaded plugins: presto, refresh-packagekit
repo id                  repo name                                 status
Recital                  Recital Update Server                     enabled:      1
adobe-linux-i386         Adobe Systems Incorporated                enabled:     17
fedora                   Fedora 12 - i386                          enabled: 15,366

As you can see the Recital repo is now being picked up and we have access to all the packages it is hosting.
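
From here, any package hosted in the repository installs like any other (the package name recital is just an illustration):

$ yum install recital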

See how easy that was!


Copyright © 2025 Recital Software Inc.
