Saturday, March 31, 2012

JCache on NoSQL MySQL Cluster 7.2 ( Memcached )

With the release of MySQL Cluster 7.2 and its support for the native Memcached API, we can also use the HA cluster for NoSQL besides JPA and SQL.
In this blogpost I will try out this NoSQL feature with JCache (alias JSR 107 or javax.cache; it will be part of Java EE 7 and it also works in Java 6). JCache defines a standard Java Caching API for use by developers and a standard SPI ("Service Provider Interface") for use by implementers. Oracle Coherence will also support JCache. For more information see Greg Luck's blog.

With JCache and MySQL Cluster alone we can't get this example running; we also need a Memcached JCache provider. Leen Toelen already did the hard work: he made one which uses spymemcached as its memcache client. So we need to download his code from GitHub, and after that we also need to download the latest spymemcached jars.

The code of Leen works with JCache version 0.4, so we need to download the 0.4 version jars. You can find the source code in the JSR107 GitHub repositories. For the JCache provider you also need to download the CDI API jar.

For more information on Memcached or MySQL Cluster you can read the great blog of clusterdb, where he explains it really well, like:
  • How to set up your MySQL and Memcached environment
  • What memcached is
  • How it works
  • How to make it work on your existing tables
or you can read the MySQL Cluster 7.2 whitepaper which can be downloaded at

So we start by downloading MySQL Cluster 7.2 and configuring this cluster. I won't explain this here, there are a lot of great blogs or guides which can help you with this.

Once we have the cluster running we need to create the memcached database.

We also need at least 10 API or MYSQLD entries in the config.ini of the cluster. After this change you need to reload this config file with the ndb_mgmd daemon.


# Management Server
[ndb_mgmd]
HostName=                      # IP address of this server
DataDir= /usr/cluster/data

# Storage Nodes
[ndbd]
HostName=                      # IP address of storage-node-1
DataDir= /usr/cluster/data

[ndbd]
HostName=                      # IP address of storage-node-2
DataDir= /usr/cluster/data

# API slots for mysqld, memcached and other NDB API clients;
# repeat this empty section so there are at least 10 slots
[mysqld]
[mysqld]

Then we need to create the ndbmemcache database

mysql -p < /usr/share/mysql/memcache-api/ndb_memcache_metadata.sql
mysql> show databases;
| Database           |
| information_schema |
| mysql              |
| ndbinfo            |
| ndbmemcache        |
| performance_schema |
| test               |

We can start memcached on the cluster nodes (whichever nodes you like); it needs to connect to the NDB management service.

/usr/sbin/memcached -E /usr/lib64/ -u mysql -e "connectstring=mgt.alfa.local:1186;role=db-only" -vv

You should see output like this.

17-Mar-2012 21:12:55 CET NDB Memcache 5.5.19-ndb-7.2.4 started [NDB 7.2.4; MySQL 5.5.19]
Contacting primary management server (mgt.alfa.local:1186) ... 
Connected to "mgt.alfa.local:1186" as node id 6.
Retrieved 3 key prefixes for server role "db-only".
The default behavior is that: 
    GET uses NDB only
    SET uses NDB only
    DELETE uses NDB only.
The 2 explicitly defined key prefixes are "b:" (demo_table_large) and "t:" (demo_table_tabs)
Connected to "" as node id 7.
Server started with 4 threads.
Priming the pump ... 
Connected to "" as node id 8.
Scheduler: using 2 connections to cluster 0
Scheduler: starting for 1 cluster; c0,f0,t1
done [0.677 sec].
Loaded engine: NDB Memcache 5.5.19-ndb-7.2.4
Supplying the following features: compare and swap, persistent storage, LRU
<49 server listening (auto-negotiate)
<50 server listening (auto-negotiate)
<51 send buffer was 126976, now 268435456
<52 send buffer was 126976, now 268435456
<51 server listening (udp)
<52 server listening (udp)
<51 server listening (udp)
<52 server listening (udp)
<51 server listening (udp)
<52 server listening (udp)
<51 server listening (udp)
<52 server listening (udp)

When we go to the cluster management console (ndb_mgm) and type show, we should see something like this.

ndb_mgm> show
Cluster Configuration
[ndbd(NDB)] 2 node(s)
id=2 @  (mysql-5.5.19 ndb-7.2.4, Nodegroup: 0, Master)
id=3 @  (mysql-5.5.19 ndb-7.2.4, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @  (mysql-5.5.19 ndb-7.2.4)

[mysqld(API)] 24 node(s)
id=4 @  (mysql-5.5.19 ndb-7.2.4)
id=5 @  (mysql-5.5.19 ndb-7.2.4)
id=6 @  (mysql-5.5.19 ndb-7.2.4)
id=7 @  (mysql-5.5.19 ndb-7.2.4)
id=8 @  (mysql-5.5.19 ndb-7.2.4)
id=9 @  (mysql-5.5.19 ndb-7.2.4)
id=10 @  (mysql-5.5.19 ndb-7.2.4)
id=11 @  (mysql-5.5.19 ndb-7.2.4)
id=12 (not connected, accepting connect from any host)
id=13 (not connected, accepting connect from any host)

We are ready to do some tests in Java, and we start with the spymemcached Java library (in the MemcachedClient I connect to my two memcached servers and use the default port 11211).
Just create a MemcachedClient and do your set or get operations.
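The test program can be sketched as follows; the two host names are placeholders for your own memcached servers, and you need a running cluster plus the spymemcached jar on the classpath:

```java
import net.spy.memcached.AddrUtil;
import net.spy.memcached.MemcachedClient;

public class MemcachedTest {
    public static void main(String[] args) throws Exception {
        // Connect to the two memcached servers on the default port 11211
        // (node1/node2.alfa.local are placeholders for your own servers)
        MemcachedClient client = new MemcachedClient(
                AddrUtil.getAddresses("node1.alfa.local:11211 node2.alfa.local:11211"));

        // Read the key; on the first run it does not exist yet
        Object value = client.get("greetings");
        if (value == null) {
            System.out.println("There is no message");
            // Store the value with expiry 0 (never expires); get() waits
            // until the operation has completed
            client.set("greetings", 0, "Hello World!").get();
        } else {
            System.out.println(value);
        }
        client.shutdown();
    }
}
```

The first run prints "There is no message" and stores the value; the second run reads it back from the cluster.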

With this as the result:
2012-03-31 17:29:33.190 INFO net.spy.memcached.MemcachedConnection:  Added {QA sa=/, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2012-03-31 17:29:33.205 INFO net.spy.memcached.MemcachedConnection:  Added {QA sa=/, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2012-03-31 17:29:33.221 INFO net.spy.memcached.MemcachedConnection:  Connection state changed for
2012-03-31 17:29:33.221 WARN net.spy.memcached.MemcachedConnection:  Could not redistribute to another node, retrying primary node for greetings.
2012-03-31 17:29:33.221 INFO net.spy.memcached.MemcachedConnection:  Connection state changed for
There is no message
Process exited with exit code 0.

Run it again.

2012-03-31 17:30:47.662 INFO net.spy.memcached.MemcachedConnection:  Added {QA sa=/, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2012-03-31 17:30:47.678 INFO net.spy.memcached.MemcachedConnection:  Added {QA sa=/, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2012-03-31 17:30:47.694 INFO net.spy.memcached.MemcachedConnection:  Connection state changed for
2012-03-31 17:30:47.694 INFO net.spy.memcached.MemcachedConnection:  Connection state changed for
Hello World!
Process exited with exit code 0.

We can see that the greetings key with its value is stored in the database (you can also configure that a key is stored in memory instead of the database).

When we change the key to b:greetings, the value will be stored in the large demo table, which is optimized for bigger values (max 3 MB).

From the clusterdb blogpost:
By default, the normal limit of 14K per row still applies when using the Memcached API; however, the standard configuration treats any key-value pair with a key-prefix of “b:” differently and will allow the value to be up to 3 Mb (note the default limit imposed by the Memcached server is 1 Mb and so you’d also need to raise that). Internally the contents of this value will be split between 1 row in ndbmemcache.demo_table_large and one or more rows in ndbmemcache.external_values.

Now let's try the same with JCache.

First create a file called javax.cache.spi.CachingProvider in the META-INF/services folder.
This file must contain the JCache provider class name, in this case net.spy.memcached.jcache.SpyCachingProvider.
Then we need to create a SpyCachingProvider, set the Java parameter and configure the CacheManager & Cache.

In combination with CDI you can also use annotations: enable CDI and inject this bean into your class.
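Putting the pieces together, a minimal sketch could look like this. The system property name spymemcached.servers and the exact 0.4 builder calls are assumptions; check the spymemcached-jcache sources and the 0.4 javax.cache jars for the exact names:

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;

public class JCacheTest {
    public static void main(String[] args) {
        // Tell the provider where the memcached servers live
        // (assumed property name, see the spymemcached-jcache docs)
        System.setProperty("spymemcached.servers",
                "node1.alfa.local:11211 node2.alfa.local:11211");

        // Caching discovers the provider registered in
        // META-INF/services/javax.cache.spi.CachingProvider
        CacheManager manager = Caching.getCacheManager();
        Cache<String, String> cache =
                manager.<String, String>createCacheBuilder("greetingsCache").build();

        cache.put("greetings", "Hello World!");
        System.out.println(cache.get("greetings"));
    }
}
```

With CDI enabled you could instead annotate a method with javax.cache.annotation.CacheResult and let the interceptor do the get/put for you.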

Sunday, March 25, 2012

Oracle bulk insert or select from Java with Eclipselink

Database bulk operations, like inserting 10,000 rows from Java, can be hard and costly, and you won't come near the time the same operation takes in PL/SQL. So the solution is to pass the array of Java JPA entities to a PL/SQL package and let the optimized Oracle Database do the hard work.

To make this work in Java we need to use at least EclipseLink version 2.3.2; this version has the required Java classes.

In this blogpost I will show you how you can retrieve all the departments from the HR demo schema, add a new department, and do a multi-insert.

On the PL/SQL side we need to define PL/SQL row and table types in our Oracle package. Because EclipseLink can't handle these PL/SQL table types directly, we also need to define the matching Oracle Type counterparts.

So let's create these objects in the Oracle Database under the HR schema

We are ready with the Oracle Database.

So the first step is to create the Departments entity. This entity is not based on a Table but on the Oracle Type, so we need to add @Embeddable and @Struct annotations.
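A sketch of such an entity; the Oracle type name DEPT_TYPE and the field names are assumptions and must match the Oracle Type you created in the HR schema:

```java
import javax.persistence.Embeddable;

import org.eclipse.persistence.annotations.Struct;

// Maps the class onto the Oracle object type instead of a table
@Embeddable
@Struct(name = "DEPT_TYPE",
        fields = { "DEPARTMENT_ID", "DEPARTMENT_NAME", "MANAGER_ID", "LOCATION_ID" })
public class Departments {

    private Long departmentId;
    private String departmentName;
    private Long managerId;
    private Long locationId;

    // getters and setters omitted
}
```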

We also need to create some department utils in which we define the EclipseLink PLSQLrecord and PLSQLCollection variables and an ObjectRelationalDataTypeDescriptor for the EntityManager session.
We need to do this because otherwise we can't map the Oracle Types to the PL/SQL table types; EclipseLink also needs to link the Departments class to the Oracle Type.

Of course we need to create a persistence unit which connects to the HR demo schema.

We are ready to test our solutions.

First we start with a single insert where we pass a department as parameter of a procedure.
These are the steps to make this work.
Define a PLSQLStoredProcedureCall with your Oracle procedure name and all the parameters; because we only pass on a record, we only need the department PLSQLrecord. Create a DataReadQuery and add our department entity. Retrieve the ServerSession and add the department descriptor.
And at last fire the procedure.
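The steps above can be sketched as follows. The package and procedure name DEPT_PLSQL_PKG.INSERT_DEPT, the argument name P_DEPT and the DepartmentUtils helpers are assumptions; adapt them to your own PL/SQL package and utility class:

```java
import java.util.Vector;

import javax.persistence.EntityManager;

import org.eclipse.persistence.jpa.JpaEntityManager;
import org.eclipse.persistence.platform.database.oracle.plsql.PLSQLStoredProcedureCall;
import org.eclipse.persistence.queries.DataReadQuery;
import org.eclipse.persistence.sessions.server.ServerSession;

public class DepartmentDao {

    public void insertDepartment(EntityManager em, Departments department) {
        // 1. Define the call with the procedure name and its record parameter
        PLSQLStoredProcedureCall call = new PLSQLStoredProcedureCall();
        call.setProcedureName("DEPT_PLSQL_PKG.INSERT_DEPT");
        call.addNamedArgument("P_DEPT", DepartmentUtils.getDepartmentRecord());

        // 2. Create the query and register the IN argument
        DataReadQuery query = new DataReadQuery();
        query.setCall(call);
        query.addArgument("P_DEPT");

        // 3. Retrieve the ServerSession and add the department descriptor
        ServerSession session = em.unwrap(JpaEntityManager.class).getServerSession();
        session.addDescriptor(DepartmentUtils.getDescriptor());

        // 4. Fire the procedure with the entity as argument value
        Vector args = new Vector();
        args.add(department);
        session.executeQuery(query, args);
    }
}
```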

In the second test we will do a multi-insert. This works the same as the single insert, only now we need to use the department PLSQLCollection and set our parameters to the Array type.

At last we will retrieve all the departments, using a function which returns an array of departments. In this case we use a PLSQLStoredFunctionCall together with the department PLSQLCollection. As result we get a DatabaseRecord in which we can read the RESULT key, which contains all our departments.
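A sketch of this retrieval. The function name DEPT_PLSQL_PKG.GET_DEPTS and the DepartmentUtils helper are assumptions, and the exact setResult signature may vary between EclipseLink versions:

```java
import java.util.List;

import org.eclipse.persistence.platform.database.oracle.plsql.PLSQLStoredFunctionCall;
import org.eclipse.persistence.queries.DataReadQuery;
import org.eclipse.persistence.sessions.DatabaseRecord;
import org.eclipse.persistence.sessions.Session;

public class DepartmentFinder {

    @SuppressWarnings("unchecked")
    public Object findAllDepartments(Session session) {
        PLSQLStoredFunctionCall call = new PLSQLStoredFunctionCall();
        call.setProcedureName("DEPT_PLSQL_PKG.GET_DEPTS");
        // the function returns the PL/SQL table type mapped by the collection;
        // the result is exposed under the RESULT key
        call.setResult(DepartmentUtils.getDepartmentCollection());

        DataReadQuery query = new DataReadQuery();
        query.setCall(call);

        List<DatabaseRecord> rows = (List<DatabaseRecord>) session.executeQuery(query);
        // the RESULT key holds the array of departments
        return rows.get(0).get("RESULT");
    }
}
```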

Here you can download the project on github


Wednesday, March 14, 2012

Processing EDIFACT documents in Oracle B2B

In my previous blogpost I already showed you how to process custom XML documents in Oracle B2B and pass them to the Oracle SOA Suite. In part two we will process an EDIFACT document. This time we need to use the Oracle Document Editor, where we change the EDIFACT guideline, analyze the EDI documents against this guideline, and do an export which we can import into Oracle B2B.

Here is an example of an EDIFACT document which is used in the Dutch energy world (an EDIFACT dialect called EDINE). It is an acknowledgement document called APERAK.

UNA:+,? '
FTX+AAO+++In period 199904112015 199904112030 your changes are not yet approved: because of inconsistency between other PRCs in the given period'

Before we can start we need the Message Implementation Guide (MIG) of this APERAK message and of course study the general EDIFACT guidelines, which explain the UNA, UNB, UNT and UNZ segments.

First we need to identify the message. For this we need to analyze the UNH segment.


The message type is APERAK and the version plus release is D96A. These values are important for the Oracle Document Editor; this way we can detect and select the right EDIFACT APERAK definition and register it in Oracle B2B.

Now we can startup the Oracle Document Editor ( this can be downloaded in the Prerequisites & Recommended Install Process of the SOA Suite download on OTN, for EDIFACT we only need B2B Document Editor Part 1 )

Create a new guideline where we open the EDIFACT part and go to D96A node and select the APERAK message

This is the default implementation of the Aperak Message.

Because we are using an EDIFACT dialect, we need to study the message implementation guide and change this guideline so it matches the MIG. Delete segments, element groups and elements, and add your own values in the element standard lists.

When we are finished with the modifying part we can export this guideline. Click on Export ( File menu )  and select Oracle 2.0

Select Show advanced options.

 Provide your own namespace

Suppress Enumeration in XSD

Then finish the export.

Next step is to use the Analyzer so we can test the created guideline on our EDIFACT test message.

Select the test message and select Show Advanced options (otherwise we can't generate an XML output based on this test message).

Choose From a guideline file and select your created guideline.

Our test message does not contain a functional group segment, so choose Syntax 3 - no functional group.

The Oracle Document Editor can also generate an XML output, but we need to enable this option.

The guideline and test message do not have any errors, so we are ready to use them in Oracle B2B.

To see the generated XML based on this test message we can press the XDATA button. This is the XML which can be processed by Oracle SOA Suite.

The next step is to register the APERAK guideline in Oracle B2B. For this we need to go to the Administration part of the B2B application.

Go to the Document Tab and create a new version under EDI_EDIFACT. This version should be called D96A.
Also, when we look at this line of our test EDIFACT message: UNB+UNOC:3+8712423009097:14+8716867999990:14+080928:1347+1001'
we should use UNOC as the syntax identifier, and the version is 3.

Also we don't use a functional group and UN is the Controlling Agency.

Create the APERAK Document Type and use FA as Function Group Identifier Code.
When we take a look at the UNH record of the test message we see that UN is our Controlling Agency and EDINE1 is the Transaction Association. UNH+1001+APERAK:D:96A:UN:EDINE1'

Create the APERAK Document Definition and upload the XSD definition and ECS file.

Next step is to add the EDI identifiers to all the involved Parties. This will be used to identify the agreement.
To know your sender or receiver identifications we can look at the UNB record.

Add the EDI Interchange ID and Qualifier identifiers to all the involved Parties.

Create the Aperak agreement between two parties where we add the EDI identifiers and select Validate and Translate options.

Add a listening channel, so B2B will listen for new EDIFACT files.

We should see the APERAK messages in the Business Messages

And at last this is how the translated XML message looks.

Sunday, March 11, 2012

Handling Custom XML documents in Oracle B2B

With Oracle B2B, which is part of Oracle SOA Suite 11g R1, you can handle and interchange all kinds of messages between partners. Oracle B2B supports many document protocols, like of course your own custom ones, EDIFACT, HL7, RosettaNet and many more. Besides this it also supports many interchange channel protocols like AS2, email, FTP, file, etc.
B2B can listen on those channels and tries to identify the document together with the sender and the receiver. For this you need to register the document and create an Agreement between the two parties.  On the partner level you need to register some identifiers like a common name or an email address.
When the document is identified, it will be translated to XML and sent to an output channel; this can be a JMS queue on which Oracle SOA Suite listens.

In this blogpost I made a small example in which we have an employee XML message which is delivered to a folder. B2B has a listening channel on this folder and tries to identify the message (using XPath and examining the file name). This particular agreement says it has to deliver the message to a queue.
This queue is read by Oracle SOA Suite, and the composite generates an employee response message and uses a B2B adapter to deliver it back to Oracle B2B.
The document is identified by Oracle B2B, and this agreement has email as output: the response message is delivered as an attachment in the email.

Of course this XML use case can also be done in Oracle SOA Suite alone, but in my next blogpost we will make it more interesting by doing the same with an EDIFACT message, where B2B can validate and transform the EDIFACT and even send an acknowledgement based on the input message.

We start with the employee and the response schemas.

The employee schema

The employee response schema

I will use the weblogic user for all the B2B administration, so I need to assign the IntegrationAdministrators role to the weblogic user (this can be done in the myrealm security realm).

We start by registering these XSD documents in the Administration menu of the B2B application.
This is the URL of the B2B application, located on the SOA server: http://soa_server:8001/b2bconsole/faces/login.jspx

Select the Custom Document Protocol and click on the green plus button where we create a new Document Protocol Version called Employee_XSD

Save and click on the new Type button where we create a Document Type Name.

On this screen we can define our Definition; here we need to upload the XSD.
Important for the identification of the message is that we provide the Identification Expression.

In this case I search for the element called type and check if it has employee2 as value.
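As a sketch, an identification expression for this check could look like the following XPath; the exact expression depends on your schema (this one deliberately ignores namespaces):

```xpath
//*[local-name()='type' and text()='employee2']
```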

We do the same steps for our employee response message.

Now we can move to the Partners registration part of the B2B application. Here we will create a partner, add some identifiers, register which documents can be handled by this partner, and at last define the output communication channel where the message will be sent to.

First click on MyCompany, this is the master partner ( basically this is you, the owner of the B2B server ).

We need to define some Identifiers which can be used in the Partner Agreements.
First a Generic Identifier with MyCompany as value, and another one with an email address as value.

The Document and the roles this partner can handle: for EmployeeSchema de-select Sender and for the Response de-select Receiver.

The employee message will be delivered to a JMS Queue so we need to define an output channel. Define a JMS channel and provide the JNDI names of the Connection Factory and the JMS Queue.

We also need a trading partner, so I created a partner called CorpA. Here we also need to define some Generic Identifiers, like CorpA and an email address.

The documents: in this case CorpA is the sender of the Employee message and the receiver of the Response message.

CorpA wants to receive the response as an email attachment, so we need to define an email channel with the mail server details.

Next we need to define agreements between CorpA and MyCompany and back.
Select the CorpA Partner and create an Agreement

The incoming employee message agreement.

Here we need to select the Employee document and on the MyCompany side we need to choose the JMS channel. Click Save and Deploy the agreement.

In the response message agreement we need to use the Response document and, on the CorpA side, select the email channel. Also on the MyCompany side add the Email identifier, so B2B knows who is the sender of the email. Click Save and Deploy the agreement.

The next part is Oracle SOA Suite which receives the employee message and creates the response message.

Drag the B2B Adapter to the Exposed Services side of the composite.

  • Choose JMS as B2B integration type. 
  • Select your AppServer Connection -> The weblogic domain which hosts the SOA & B2B software.
  • Receive
  • Click Next on the Basic Tab
  • Open the Custom Document Definition and select the EmployeeSchema
  • Select Oracle WebLogic JMS in OEMS
  • Select your AppServer Connection -> The weblogic domain which hosts the SOA & B2B software.
  • Lookup the Queue in the Destination name.
  • Create an EIS connection with this name in the JMS Resource Adapter plan.

Drag the B2B Adapter to the External References side of the composite.

  • Choose Default as B2B integration type.
  • Select your AppServer Connection -> The weblogic domain which hosts the SOA & B2B software.
  • Send
  • Click Next on the Basic Tab
  • Open the Custom Document Definition and select the ResponseSchema

    Add a Mediator and wire the two adapters to this mediator.

    • Add a transformation where we will create the response message.
    • Add the B2B Assign Values so B2B can detect the agreement. 

    You need to provide the following Assigns

    b2b.fromTradingPartnerId -> MyCompany
    b2b.fromTradingPartnerType -> Name

    b2b.toTradingPartnerId -> CorpA
    b2b.toTradingPartnerType -> Name
    b2b.documentTypeName -> EmployeeResponseType
    b2b.documentProtocolName -> Custom

    b2b.documentProtocolVersion -> Employee_XSD

    Deploy the composite

    Back to B2B

    The last part before we can test this B2B example is to configure a listening channel which polls for new messages.

    We need to go back to the Administration Tab of the B2B application.

    Add a file channel and provide the Folder name. 
    Important: we need to define a filename format so B2B can detect the sender (my custom document does not have an identification part).

    Now we can add an XML document to the MyCompany in folder; it must be called CorpA.xml.

    This is the example document with employee2 as value for the element type (else the document won't be detected).

     We can see the messages and the progress in the Reports part of the B2B Application.

    And the final result where we got an email with the employee response.

     In the next blogpost we will add some EDIFACT messages to B2B.