Friday, November 19, 2010

Devoxx 2010 daily notes: day four

Java EE Key Note: The Future Roadmap of Java EE

Java EE will evolve into a cloud platform and, to do so, it will need modularity based on Java SE.

JSF 2.1

JSF 2.1 will propose:

  • transient state saving
  • XML view cleanup
  • targeting HTML 5

JMS

JMS is an old specification, which needs to evolve to address new communication layers:

  • resolving specification ambiguities
  • standardized vendor extensions
  • integration with other specifications
  • integration with the web tier (websocket, JSON, NIO2)

JPA 2.1

JPA aims to improve standardization, flexibility, and control of persistence context synchronization. The metamodel will be extended to ORM mapping.

Several additions will take place in this release:

  • added event listeners
  • outer joins with ON conditions
  • criteria-based update and delete queries
  • mapping between JPQL and criteria queries
  • database and vendor function invocation
  • stored procedure support

The expert group will be formed by January 2011, but the release date could be delayed by the Java EE timeline (modularity is waiting for Java SE 8).

JAX-RS

JAX-RS 2.0 should be submitted in December.

Currently, the JAX-RS specification is not included in the web profile.

A client API will be added:

  • low level builder pattern
  • high level response matching
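
The client API was not finalized at the time of the talk; for illustration, here is what the low-level builder style eventually looked like when JAX-RS 2.0 shipped (the URL is hypothetical):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class ClientSketch {
    public static void main(String[] args) {
        // Build a client and issue a GET request, builder style.
        Client client = ClientBuilder.newClient();
        String books = client.target("http://example.com/api/books")
                .request(MediaType.APPLICATION_JSON)
                .get(String.class);
        System.out.println(books);
        client.close();
    }
}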

JAX-RS will be adapted for MVC architecture based on JSP and Scalate.

Asynchronous support, based on Atmosphere (HTTP streaming), will also be added, along with validation and injection. For example, @Context will be replaced by @Inject.

Comparing JVM Web Frameworks

In 2004, the criteria for choosing a web framework were:

  • request, component or RIA
  • stateful or stateless
  • project community
  • project future
  • maintenance
  • technical features

And the choice could be described this way:

  • High traffic: request framework
  • Intranet: component
  • Products: the product vendor's framework

Now, in 2010, the criteria are:

  • development productivity
  • developer perception
  • learning curve
  • project health
  • developer availability
  • job trends
  • templating
  • components
  • ajax
  • plugin
  • scalability
  • REST support
  • multi language
  • validation
  • books
  • doc
  • mobile
  • risk

After a comparison, the top five is:

  • Spring MVC (pros: configuration, integration, REST support; cons: instant reload)
  • GWT (pros: written in Java, easy to learn; cons: slow compilation)
  • Rails (pros: easy to learn, documentation; cons: performance, development tools)
  • Grails (pros: easy for Java programmers; cons: targeted at Java programmers)
  • Wicket (pros: for Java developers; cons: no developers available)

Standard DI with @Inject and JSR-330

JSR-330 is a joint proposal from Google and SpringSource. It boils down to five annotations and one interface.

@Inject identifies injectable members, which can be any visible fields (static or instance).

Injection order is:

  1. constructor
  2. fields
  3. methods

By default, each time an injection must be done, a new instance is created.

@Qualifier solves naming ambiguities by providing a way to create annotations that qualify a class.

The Provider interface is a kind of factory, for cases where plain injection is not enough to create an object.

Guice and the Spring Framework implement this JSR.
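
A minimal sketch of these pieces together (Heater, Pump, and the "electric" qualifier are illustrative):

import javax.inject.Inject;
import javax.inject.Named;
import javax.inject.Provider;

interface Heater { void on(); }
interface Pump { void pump(); }

public class CoffeeMaker {

    // @Named is the built-in qualifier annotation; it disambiguates
    // between several Heater implementations.
    @Inject @Named("electric")
    Heater heater;

    private final Provider<Pump> pumpProvider;

    // Constructor injection: a fresh Pump can be obtained from the
    // Provider each time one is needed.
    @Inject
    public CoffeeMaker(Provider<Pump> pumpProvider) {
        this.pumpProvider = pumpProvider;
    }

    public void brew() {
        heater.on();
        pumpProvider.get().pump();
    }
}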

The Java Persistence Criteria API

JPA started with JPQL, a SQL like query language that can be used in static queries (@NamedQuery) and dynamic queries.

JPQL for dynamic queries is:

  • String based: very easy to write
  • SQL-like syntax
  • but it is a String: risk of performance loss, loss of readability
  • no type safety

The Criteria API is object based and typesafe (it uses a metamodel).

A CriteriaQuery is a composite of:

  • roots
  • joins
  • expressions
  • predicates
  • selections
  • ordering
  • grouping
  • assignment methods: select(), multiselect(), from()
  • result browsing: getSingleResult()

CriteriaBuilder is a factory for CriteriaQuery, Expression, and Predicate objects. For example, the JPQL query select c from Customer c becomes:

CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<Customer> cq = cb.createQuery(Customer.class);
Root<Customer> root = cq.from(Customer.class);
cq.select(root);
TypedQuery<Customer> tq = em.createQuery(cq);
List<Customer> resultList = tq.getResultList();

  • join -> root.join("orders")
  • where -> cq.where(cb.equal(root.get("..."), ...)).select(root)

There is a hole in type safety in join and where: the use of strings to specify attribute names.

The solution adopted by JPA is to generate a static metamodel (a logical view) at compile time.

Directly accessible via EntityManager.getMetamodel(), it is defined by a hierarchy of interfaces that describe types and attributes, so browsing the metamodel is possible.

Now it is possible to replace strings in criteria with metamodel attributes, or to access the metamodel directly (path navigation).
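
For example, assuming a Customer entity with a name attribute and its generated metamodel class Customer_, the string-based get("name") becomes a typed attribute reference:

CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<Customer> cq = cb.createQuery(Customer.class);
Root<Customer> c = cq.from(Customer.class);
// Customer_.name is a SingularAttribute generated at compile time,
// so a typo or a type mismatch is caught by the compiler.
cq.select(c).where(cb.equal(c.get(Customer_.name), "John"));
List<Customer> result = em.createQuery(cq).getResultList();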

Path navigation brings the ability to access attributes of a composite object: c.get(attribute_a).get(attribute_b). It is used to build compound predicates.

CriteriaQuery is able to create subqueries and to call database-specific functions: cb.function("NAME", return_type, path);

Criteria parameters are created using CriteriaBuilder and can be named.

Queries are modifiable.

HTML 5 Fact and Fiction

HTML 5 is mainly a collection of features.

It is currently better supported on mobile devices than on desktop browsers.

HTML 5 is driven by mobile development. Developing a native application for each platform is too expensive; we need a common platform, and this is the web.

Feature detection can be done with JavaScript, preferably by using a library such as Modernizr.

Now, browser evolution is driven by specifications. This avoids a quirks mode where the standard is more or less respected. The HTML header becomes simply: <!DOCTYPE html>

HTML brings very few components, so developers use external libraries, but there is no standard. HTML 5 comes with new input types; if one is not yet supported by the browser, a plain text field is displayed. One goal of such components is to bring facilities to mobile devices. For example, an email input field makes the @ symbol directly available on the keyboard.

It also standardizes things that have been used for decades, such as headers and footers.

It also brings:

  • autofocus
  • local storage: a response to heavy cookie usage
  • offline mode: caches the application
  • microdata: semantic web
  • multithreading
  • geolocation (not part of HTML 5)

Activiti in Action

Activiti is an open source BPM tool. It is now mainly developed by Alfresco (though the two are distinct products) and is supported by SpringSource and Signavio. Activiti is based on the BPMN 2.0 standard, and the data can be persisted in an XML format.

Activiti lets you model a process in BPMN 2.0 using "Activiti Modeler" (from Signavio) or via an Eclipse-based interface. At each step of the process, you can define who can execute the task and which information should be filled in. Afterwards, you can initiate process instances, and each person involved has access to the "Activiti Explorer", where they can fill in the required information. Once a process step is completed, Activiti notifies the next person involved that they have a task to do. The "Activiti Explorer" also provides analysts with process instance status and statistics (it is also possible to easily create a BIRT report about process instances).

Activiti also provides other modules:

  • Activiti Probe (check process status)
  • Activiti Cycle (BPM collaboration platform)
  • iPhone client
  • Grails integration

Finally, Activiti is highly customizable and can be easily embedded into another application (you can, for instance, add a document into Alfresco at a particular process step using 5 lines of Java code …)

  • SpringSource provides an integration via a bean
  • A full query API provides access to the Activiti data
  • a REST interface is provided (used by the iPhone application)

Thursday, November 18, 2010

Devoxx 2010 daily notes: day three

Java SE Keynote

Java platform is now targeting:

  • productivity
  • performance
  • universality
  • modularity
  • integration

Java language evolution will be:

  • for generics: declarations like HashMap<String, List<String>> map = new HashMap<String, List<String>>(); will become HashMap<String, List<String>> map = new HashMap<>(); (the diamond operator)
  • lambda expressions will be added (JDK 8)
  • type reification: primitive types will be usable as generic type arguments
  • modules will be added: based on a descriptor (module-info.java) and managed by JMOD (with integration with Maven and its repositories)
  • modules will be packageable as RPM, JAR, or DEB, or directly managed by JMOD

Java evolution has been planned until 2030.

For the roadmap:

  • Java 7: end of July 2011 (Project Coin, fork/join framework)
  • Java 8: end of 2012

Java Persistence 2.0

JPA 2.0 brings several new features:

  • Criteria API
  • pessimistic locking
  • metamodel API
  • standardized configuration
  • collections of basic types

Basic type collections (see the sketch below):

  • @ElementCollection: the collection must be stored as an element collection
  • @CollectionTable(...): specifies the table name
  • @Column(...): specifies the collection column name
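
A minimal mapping sketch combining these three annotations (entity, table, and column names are illustrative):

import java.util.HashSet;
import java.util.Set;
import javax.persistence.*;

@Entity
public class Person {

    @Id
    private Long id;

    // A collection of basic types stored in its own table.
    @ElementCollection
    @CollectionTable(name = "PERSON_NICKNAME",
            joinColumns = @JoinColumn(name = "PERSON_ID"))
    @Column(name = "NICKNAME")
    private Set<String> nicknames = new HashSet<String>();
}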

Embeddable objects:

  • multi level (@AttributeOverride)
  • relationship (@AssociationOverride, @JoinTable)

Persistent list ordering:

  • implemented by the provider as an additional column (@OrderColumn)
  • for legacy schemas, use @OrderBy to rely on the existing schema

Map usage:

  • key: basic type, embeddable, entity
  • value: basic type, embeddable, entity

JPQL:

  • conditional expressions: CASE ... THEN ... (useful with maps)
  • restriction on type: TYPE(e) IN (...) (useful for inheritance)

Criteria API: supports object and string literals

  • CriteriaQuery: query description
  • CriteriaBuilder: factory for criteria query objects
  • How to express select c from Customer c:
    CriteriaQuery<Customer> cq = criteriaBuilder.createQuery(Customer.class);
    Root<Customer> root = cq.from(Customer.class);
    cq.select(root);

Metamodel: abstract schema model.

Pessimistic locking: specify the mode as properties.

Second-level cache API: use of @Cacheable on entities.

Standardized configuration by providing a set of default properties.

Validation: automatic validation on PrePersist, PreUpdate, and PreRemove events.

The Next Big JVM Language

Currently, languages challenge Java by providing more features and different ways of expressing things (see Stephen Colebourne's work).

What Java has done right:

  • it simplified migration from C++
  • it took old ideas, and the JVM made them popular (the garbage collector, for example)

What Java has done wrong:

  • checked exceptions: how do you ignore them?
  • primitives: split the type system
  • arrays: split the type system and expose JVM internals
  • monitors: unsafe effects, difficult to optimize
  • statics: cannot be overridden, hurt concurrency
  • method overloading: complexity for the compiler and for the programmer
  • generics: too complex; backward compatibility resulted in erasure

The Next Big JVM Language should:

  • have a C-like syntax
  • be multi-paradigm: object oriented and functional
  • have a static type system
  • make reflection easy
  • have properties
  • have closures in the language core
  • handle null better (avoiding NullPointerException and long, boring debugging sessions)
  • not expose threads
  • have modules
  • have good tooling (make tooling easy to build)
  • be extensible (combination of language features)
  • not be verbose

No JVM language fulfils all these criteria, so perhaps it is time for a non-backward-compatible version of Java: Java 9?

Project Lambda: To Multicore and Beyond

The basis of Project Lambda is SAM types: interfaces or abstract classes with a single abstract method.

Closures take the following syntax:

#{Person.getLastName()}

The collection framework will be adapted to provide facilities based on closures, such as filtering, sorting, etc.
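
For comparison, here is what such closure-based collection processing eventually looked like when lambdas shipped in Java 8 (this is the final syntax, not the strawman syntax shown at the talk):

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FilterSketch {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Anna", "Bob", "Alice");
        // Keep only the names starting with "A", using a lambda as the predicate.
        List<String> as = names.stream()
                .filter(n -> n.startsWith("A"))
                .collect(Collectors.toList());
        System.out.println(as); // prints [Anna, Alice]
    }
}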

New keywords will be added to define immutable classes and their properties:

value class ImmutableClass {
    properties String lastname;
}

Currently, modifying an interface breaks API compatibility. With Lambda, it will be possible to add a method with a default behavior.
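
A sketch of such a default method, again using the syntax that eventually shipped in Java 8 (the Shape interface is illustrative):

public interface Shape {

    double area();

    // Existing implementations of Shape inherit this method
    // without being modified or recompiled.
    default String describe() {
        return "shape with area " + area();
    }
}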

Spring 3.1 - Themes and Trends

Spring 3.0 brings:

  • annotated component model
  • expression language
  • REST support
  • Portlet 2.0
  • Java EE 6 support (JPA 2)
  • custom annotations
  • includes JavaConfig (configuration classes)
  • @Value (value expressions with EL)
  • model validation: @Valid on parameters
  • type conversion: @DateTimeFormat(iso=ISO.DATE)
  • scheduling annotation: @Scheduled

Spring 3.1 will provide:

  • environment profiles
    • activated in command line by: -DspringProfile=env
    • provides environment abstraction as an API
    • custom placeholder resolution
  • java based configuration: @Configuration on class
  • cache abstraction (see the sketch after this list)
    • EhCache support (important for the cloud)
    • support for GemFire, Coherence
    • annotation @Cacheable
    • caching can be conditional
    • @CacheEvict to evict cache entries
    • new namespace declaration
  • conversation management:
    • HttpSession ++
    • association with browser and tabs
    • foundation for WebFlow 3
  • Groovy will be usable as a template engine
  • c: namespace for constructor arguments
  • Roadmap:
    • M1 in December
    • 3.1 GA in March/April 2011
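
A minimal sketch of the cache annotations mentioned above, as they eventually shipped in Spring 3.1 (class, cache, and method names are illustrative):

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;

public class BookService {

    // The result is cached in the "books" cache, keyed by isbn;
    // subsequent calls with the same isbn skip the method body.
    @Cacheable("books")
    public Book findBook(String isbn) {
        return loadFromDatabase(isbn);
    }

    // Updating a book evicts the stale cache entry for that isbn.
    @CacheEvict(value = "books", key = "#isbn")
    public void updateBook(String isbn, Book book) {
        saveToDatabase(isbn, book);
    }

    private Book loadFromDatabase(String isbn) { return new Book(); }
    private void saveToDatabase(String isbn, Book book) { }
}

class Book { }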

HTML 5 Websockets: A New World of Limitless, Live, and Wickedly Cool Web Applications

HTTP is not full duplex, so WebSockets bring full duplex to the web.

It is implemented as a JavaScript API that uses an IETF protocol, but it is still compatible with HTTP. The idea is not to replace HTTP.

WebSocket allows cross-origin, secured, and unsecured communication as standard.

It is supported by browsers:

  • Chrome 4.0+
  • Safari 5.0, iOS 4
  • Firefox 4 beta
  • to check compatibility, go to the WebSocket web site

Behind the scenes, WebSocket allows:

  • connection reuse
  • extending the client-server protocol (available in the JS API)

In tests, WebSocket, compared to pure HTTP, reduced latency from 150 ms to 50 ms.

Visage Android Workshop

Visage is a dynamic UI language inspired by Flex and JavaFX for different targets, especially Android. It is declarative, brings data binding, provides closures to implement triggers, and brings null safety.

Compilation is divided in two phases:

  • Visage to Java bytecode
  • Bytecode to Dalvik

Tuesday, November 16, 2010

Mockito: a quick tour

Mockito is a test framework for building... mock objects. The development team has learned from other similar frameworks like JMock and EasyMock. It is based on a DSL for stubbing mocks, implemented with fluent interfaces, and it can mock classes and interfaces. The restrictions are:

  • can't mock static classes
  • can't mock final classes
  • can't mock static methods
  • can't mock equals and hashCode

So, let's go for a quick tour.

First Sample

As usual, we create a DAO interface that is used by a service:

public interface SampleDao {

    Sample findByPrimaryKey(Long id);
}

public class SampleService {

    private SampleDao sampleDao;

    public void setSampleDao(SampleDao sampleDao) {
        this.sampleDao = sampleDao;
    }

    public boolean exists(Long id) {
        return sampleDao.findByPrimaryKey(id) != null;
    }
}

To test our service, we need to create a mock of the DAO:


public class SampleServiceTest {

    private SampleService sampleService = new SampleService();
    private SampleDao sampleDao;

    @Before
    public void setUp() throws Exception {
        if (sampleDao == null) {
            sampleDao = Mockito.mock(SampleDao.class);
            sampleService.setSampleDao(sampleDao);
        }
    }
}

Creating a mock is simply done by calling the mock method with a class object. Mockito can mock interfaces, which is the minimum for a mocking framework, but it can also mock classes in its standard edition (EasyMock, for example, can mock classes, but an extension must be used to do so). When an object is a mock and no behavior has been specified, its methods return the default value of their return type.

Our test is quite simple, and the return value of findByPrimaryKey is correct. But how can we be sure our DAO code was called? Mockito provides an easy way to check:

@Test
public void testExists() {
    Random random = new Random(System.currentTimeMillis());
    Long id = random.nextLong();
    boolean result = sampleService.exists(id);

    Mockito.verify(sampleDao).findByPrimaryKey(id);

    Assert.assertFalse(result);
}

This way, Mockito checks that the method findByPrimaryKey has been called exactly once, and with id as its parameter. If no verify is present in the test, no check is done on the method execution. So the order in which the behavior is implemented does not matter, and mock building is decoupled from the method call stack.

Implementing behavior

Implementing the behavior is called stubbing in Mockito. In our previous sample, the mocked DAO uses the default behavior, i.e. findByPrimaryKey always returns null. But how do we configure the DAO so that a call to exists returns true?

@Test
public void testExists() {
    Random random = new Random(System.currentTimeMillis());
    Long id = random.nextLong();

    Mockito.when(sampleDao.findByPrimaryKey(id)).thenReturn(new Sample());

    boolean result = sampleService.exists(id);

    Mockito.verify(sampleDao).findByPrimaryKey(id);

    Assert.assertTrue(result);
}

It is very simple to implement. The when method encapsulates a call to the stubbed method, and thenReturn provides the object to return when the method is called. By stubbing the method this way, we say that a new Sample object is returned for this specific id value. In our case it does not matter, because we produce the id ourselves. But for cases where we don't know the id value, Mockito provides ways to implement more complex behavior depending on the passed arguments.

We can also throw an exception on a method call. For example, suppose our findByPrimaryKey does not return null when no data has been found, but instead throws an EntityNotFoundException (from the JPA API). Our exists method becomes:

public boolean exists(Long id) {
    try {
        sampleDao.findByPrimaryKey(id);
    } catch (EntityNotFoundException e) {
        return false;
    }

    return true;
}

And the test:

@Test
public void testExistsNotFound() {
    Random random = new Random(System.currentTimeMillis());
    Long id = random.nextLong();

    Mockito.when(sampleDao.findByPrimaryKey(id)).thenThrow(new EntityNotFoundException());

    boolean result = sampleService.exists(id);

    Mockito.verify(sampleDao).findByPrimaryKey(id);

    Assert.assertFalse(result);
}

Now we have two tests, each implementing its own behavior on the DAO. To unregister behavior on a mock, we have to reset it:

@After
public void tearDown() throws Exception {
    if (sampleDao != null) {
        Mockito.reset(sampleDao);
    }
}

Dealing with methods returning void

Now we know how to stub a method with a non-void return type, but how do we implement behavior on a void method? To do so, we add a method to our DAO interface:

public interface SampleDao {

    Sample findByPrimaryKey(Long id);

    void update(Sample sample);
}

To mock it, we have to change the stubbing syntax a little. Instead of writing:

when(mock.call_to_the_method).then()

The syntax becomes:

doNothing().when(mock).call_to_the_method

The standard behavior of an update method is to call a findByPrimaryKey method to retrieve the actual stored bean values. If no bean is found, an EntityNotFoundException is thrown.

To implement the case of update success:

Mockito.doNothing().when(sampleDao).update(new Sample());

To implement the case of update failure:

Mockito.doThrow(new EntityNotFoundException()).when(sampleDao).update(new Sample());

Advanced Stubbing

Until now we have implemented simple stubbing, so it is time to look at advanced stubbing. To do so, we take our two test methods on exists and turn them into one, implementing a behavior that can either throw an exception or return something, depending on the parameter. I agree: if I do that, I can't check the service's return value, but this is only an example, so be indulgent.

@Test
public void testExists() {
    Random random = new Random(System.currentTimeMillis());
    Long id = random.nextLong();

    Mockito.when(sampleDao.findByPrimaryKey(id)).thenAnswer(
        new Answer<Sample>() {
            @Override
            public Sample answer(InvocationOnMock invocation) {
                Long id = (Long) invocation.getArguments()[0];
                if (id > 10000) {
                    throw new EntityNotFoundException();
                } else {
                    return new Sample();
                }
            }
        });

    sampleService.exists(id);
    Mockito.verify(sampleDao).findByPrimaryKey(id);
}

Now, if id is greater than 10000, an exception is thrown; otherwise a new Sample object is created. Answer is an easy way to implement behavior that depends on the method parameters, but its usage should not be systematic: the then().do...() syntax should be preferred in most cases.

Until now, we had to specify for which id our behavior is implemented. But we want to implement it for any id, don't we? Mockito comes with matchers to express this kind of constraint:

@Test
public void testExists() {
    Random random = new Random(System.currentTimeMillis());

    Mockito.when(sampleDao.findByPrimaryKey(Mockito.anyLong())).thenAnswer(
        new Answer<Sample>() {
            @Override
            public Sample answer(InvocationOnMock invocation) {
                Long id = (Long) invocation.getArguments()[0];
                if (id > 10000) {
                    throw new EntityNotFoundException();
                } else {
                    return new Sample();
                }
            }
        });

    sampleService.exists(random.nextLong());
    Mockito.verify(sampleDao).findByPrimaryKey(Mockito.anyLong());
}

WARNING: it is not possible to use a matcher on only one parameter of a stubbed method. Every parameter must then use a matcher, otherwise Mockito throws an InvalidUseOfMatchersException.

That's it for this quick tour. To go further, have a look at the Mockito documentation page.

Devoxx 2010 daily notes: day two

Dive into Android

Layouts are not pluggable in the Android SDK, i.e. layout classes are containers. It is strongly recommended to use the dip unit (device-independent pixel). To make a component use all the space left by the parent, set the attribute MATCH_PARENT; to make it as big as its content, set WRAP_CONTENT.

AbsoluteLayout should not be used anymore.

With linear layout:

  • weights redistribute empty space
  • sizes should then be 0 dip, because when building the UI, the SDK evaluates sizes first

Don't forget to use tools such as TraceView and HierarchyViewer for debugging.

JBoss Tools, the deployment ninja

JBoss Tools provides tools for code generation and editing. The challenge is that Eclipse has only one way of producing things. JBoss Tools adds several facilities:

  • project archives to hot-deploy projects on servers (they can be explored in editors)
  • deployment of single files and directories (very useful with Maven)
  • not dependent on using JBoss AS

It is possible to deploy an application by simply dragging and dropping it onto a server instance. The instance can also be remote and keeps its hot-deployment capabilities.

Introduction to HBase

HBase is modeled after Google BigTable. It is built on top of Hadoop and is dedicated to storing several hundred GB of data and more.

Its characteristics are:

  • transactions only on a single row
  • indexes on row keys
  • several million read/write operations per second
  • brings random read/write to HDFS
  • every row has a row key and a timestamp
  • it is a distributed sorted map
  • value = row key + column key + timestamp

HBase shell

HBase brings a shell to execute commands against it, for monitoring or content management:

  • status: cluster status
  • list: list all user tables
  • get: request a single row
  • put: insert
  • scan: request some rows
  • count: row count
  • delete: delete a column or row
  • remove: delete a table
  • add/remove column family

Architecture

A region is a subset of a table's rows; a region server serves region data. A master is responsible for coordinating the region servers.

To manage all this information, HBase comes with ZooKeeper.

Data is written to the region's memstore (in-memory storage), which is flushed once it reaches a certain size. Compaction is the operation that compacts data into HDFS-managed files. There are two kinds of compaction:

  • minor: each time the memstore is flushed
  • major: by default a daily operation; files are compacted into files of several GB

Region servers are not necessarily co-located with the data, and they can split regions. In this case, the master knows that a region has been split and assigns a region server to the new region.

API

The API provides configuration capabilities:

Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "localhost");

API for cluster administration is also available:

  • HTableDescriptor: table and column family
  • HColumnDescriptor: describes a column family

HTable gives access to an existing table. Put/Get/Scan objects allow you to insert/get/select row(s).
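
A minimal sketch of this client API (table, family, and qualifier names are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");

        // Insert one cell: row "row1", family "cf", qualifier "col".
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value"));
        table.put(put);

        // Read it back.
        Get get = new Get(Bytes.toBytes("row1"));
        Result result = table.get(get);
        byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col"));
        System.out.println(Bytes.toString(value));

        table.close();
    }
}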

HBase tables can be processed by MapReduce. The API provides TableInputFormat and TableOutputFormat for this:

  • TableInputFormat: splits HBase tables and uses a scanner to read each split
  • TableOutputFormat: writes in HFile format (the HDFS file format)

Advanced API

  • Add a row only if it does not already exist: table.checkAndPut.
  • Scanners can be refined by using filters.
  • The write-ahead log can be disabled: put.setWriteToWAL(boolean)
  • A proxy server is also available for access from other technologies.
  • A REST API is also available (Stargate)
  • Services can be exposed via Thrift

Deployment

HBase comes with a pre-installed ZooKeeper, but you can replace it.

Monitoring can be done with tools like Ganglia or by using the JMX interfaces.

Backups can be handled by a MapReduce job that imports/exports the data.

To improve performance, scanner caching can be enabled, but it uses more memory, and the compression method can be set to LZO. For data larger than 12 GB, use the filesystem cache.

Groovy/Grails Development in Eclipse

Groovy and Grails support is available as a Spring Tool Suite extension. For the moment, Groovy DSLs can be added, but to be recognized in Eclipse, an Eclipse extension point has to be added.

The Groovy Eclipse integration is now better than the classic compiler: the compiler gives only the first compilation error, while STS gives all errors. Soon, Groovy++, Maven, and Gradle will be supported, and DSLs will be usable without adding an extension to Eclipse.

STS brings the following facilities to Grails:

  • a new perspective
  • command wizard
  • plugin manager
  • drag & drop into tc Server (agent based reloading)
  • STS grails edition coming soon

Scalable and RESTful web applications: at the crossroads of Kauri and Lily

Kauri is a resource-oriented web platform whose core concepts are models, prototyping, RIA, routing, and pages. There is no servlet container; it is replaced by a Spring-based application container that loads modules instead of deploying WAR files. It uses Groovy for routing and JAX-RS for services.

While the front end is scalable thanks to REST services, the bottleneck is relocated to the database server, which is no more scalable than before.

To fix this problem, Lily is a scalable store and search engine, based on HBase for storage and SOLR for search, and it supports model versioning.

Devoxx 2010 daily notes: day one

Hadoop Fundamentals: HDFS, MapReduce, Pig, and Hive

Hadoop has two core components:
  • HDFS
  • MapReduce

And an ecosystem: Pig, Hive, HBase, Flume, Oozie, Sqoop.

HDFS

HDFS is a distributed file system based on GFS, sitting on top of a native file system such as extfs. It provides redundant storage. It performs best with files of 100 MB or more and is optimized for large streaming reads. HDFS does not allow random writes, but appends are allowed from version 0.21. It splits files into blocks of 64 MB or 128 MB, and these blocks are distributed and replicated across nodes. A master node keeps track of the file-to-block mapping, and data nodes hold the actual blocks.

MapReduce

MapReduce is a programming model for distributing tasks across multiple nodes. Processing is automatically parallel and fault tolerant, and it comes with a monitoring tool.

Abstractions for Java, and for scripting languages via Hadoop streaming, are provided, with the following semantics:

  • job: full program
  • task: execution of map or reduce
  • task attempt: execution of a task
  • job tracker in the master node (job and task manager) to monitor jobs

MapReduce divides work into two phases: map and reduce. Between them, there is the sort-and-shuffle phase. Before a reduce executes, every map has to be finished, and the produced data is stored on the same node as the one the data was read from. There can be a single reducer or multiple reducers, and their output is written to HDFS. One possible bottleneck is a slow mapper, since reducers cannot be launched while any mapper is unfinished; another is a huge amount of data produced by the mappers.

A combiner aggregates intermediate map output locally before it is sent to the reducer.

A job is defined by a driver class, usually in a main method, which can read configuration from /etc/hadoop/conf and where the input and output directories should be specified. The Mapper, Reducer, and Combiner implementations should be specified in the configuration (JobConf), and the job is run against that configuration. Mapper implementations should extend MapReduceBase and implement Mapper (parameterized by the key/value types read and produced). The map method takes an OutputCollector and a Reporter to, respectively, collect the produced key/value pairs and aggregate data via Counters (visible in the GUI).
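
A minimal sketch of such a driver, mapper, and combiner, using the classic word count with the old org.apache.hadoop.mapred API described above (class names and path arguments are illustrative):

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class WordCount {

    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, ONE); // emit (word, 1)
            }
        }
    }

    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum)); // emit (word, total)
        }
    }

    // The driver: configures the job and submits it.
    public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class); // local aggregation before the shuffle
        conf.setReducerClass(Reduce.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}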

The distributed cache pushes data to all slave nodes. The driver should implement Tool and be invoked via the ToolRunner.

Hive

Hive is built on top of MapReduce and provides a SQL-like interface:

  • a subset of SQL-92 plus Hive specifics
  • no transactions
  • no indexes
  • no update or delete

Hive provides a metastore that holds the structure of a table and metadata about where the data lives in HDFS. It allows copying files from the local FS to a table, but does no checking at load time; failures are found at query time.

Pig

Pig is a data-flow language located on the client side. It is executed by LocalJobRunner and on the local FS.

The latest innovations of Adobe Flash Platform for Java developers

Flash Player 10.1

Now, Flash Player supports:

  • multitouch and gestures
  • accelerometer
  • screen orientation
  • hardware acceleration

For optimization, Flash Player brings:

  • sleep mode
  • memory usage reduced by 50%
  • reduced CPU usage for video, thanks to hardware acceleration
  • the stack-trace popup can be prevented in production mode (programmatically)

AIR 2

New AIR platform version provides:

  • native process API (profile: extendedDesktop)
  • native installer generation (Windows, Android, iOS), but it needs a shared library to be installed on the device
  • cross compilation, using LLVM (but some APIs cannot be implemented)

Flex 4

Flex 4 comes with many new features:

  • FXG framework for graphics
  • skins are separated from components
  • 3D API
  • new layout framework (don't forget to override updateDisplayList on layout classes)
  • asynchronous list and paging components
  • globalization API

Mobile

  • Needs Flex 4.5 (Hero)
  • provides a debugger, a packager, a web view, geolocation, and ease of deployment

Flash Catalyst

  • Flash Catalyst reduces the gap between developers and designers
  • from a vector image (Adobe Director), we can specify which graphical elements should be interactive and generate a Flex project

LCDS

  • brings bridges to other technologies such as .NET, PHP, etc.

Live Cycle Collaboration Service

  • from $15/month
  • clustered at Adobe
  • components for dashboards, chat, and webcam management

Spring Developer Tools to push your Productivity

Spring focuses on providing tools for frameworks and languages to speed up application development. Spring Tool Suite comes with tc Server, Maven, and Spring Roo, and has auto-configuration capabilities: it detects tc Server and Tomcat at installation time.

While developing a web application, part of the time lost is in stopping/restarting the server. tc Server comes with three refresh approaches:

  • standard: reload on change
  • JMX based: reload only if dynamic content has changed
  • agent based

Intelligent data analysis - Apache Mahout

Mahout is a tool mixing data mining, to extract patterns from data, and machine learning, to extract models from data. As it is built on top of Hadoop, it can manage huge amounts of data. Its goal is to provide scalable data mining algorithms to, for example, analyse news across the internet, group articles by subject, and eliminate duplicates. Another example is searching a photo collection for faces that look like a given one.

Monday, November 15, 2010

GWT 2.1: Request Factory

RequestFactory is the new mechanism for easing client/server data transfer. It gives you the possibility to define data-centric code on the server side and business components on the client side.

Entity

The entity is the base class of the mechanism. It is the DAO implementation class, and it should be located in a shared package (accessible from both the client side and the server side). It is not mandatory for this entity to be a JPA entity, but the RequestFactory constraints make entities look like JPA entities.

public class Employee {

    private String userName;
    private String department;
    private String password;
    private Long id;
    private Employee supervisor;

    public String getUserName() {
        return userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }

    public String getDepartment() {
        return department;
    }

    public void setDepartment(String department) {
        this.department = department;
    }

    public String getPassword() {
        return password;
    }

    public void setPassword(String password) {
        this.password = password;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public Employee getSupervisor() {
        return supervisor;
    }

    public void setSupervisor(Employee supervisor) {
        this.supervisor = supervisor;
    }

    public static Long countEmployees() {
        return 2L;
    }

    public void persist() { }

    public static Employee findEntity(Long id) {
        return new Employee();
    }

    public Integer getVersion() {
        return 0;
    }
}

Above, we can see a POJO that defines an Employee. It implements getters and setters for each attribute, plus the methods countEmployees, findEntity, and getVersion. The findEntity and getVersion methods are mandatory and are used internally by RequestFactory. The countEmployees method is a data access method that can be used on the client.

Entity Proxy

The entity proxy is the entity's client-side representation, better known as a Data Transfer Object. It must implement EntityProxy and provide methods to access (read/write) the data stored in the entity. You don't have to provide data access methods such as countEmployees().

@ProxyFor(Employee.class)
public interface EmployeeProxy extends EntityProxy {

    String getUserName();
    void setUserName(String userName);

    String getDepartment();
    void setDepartment(String department);

    String getPassword();
    void setPassword(String password);

    Long getId();
    void setId(Long id);

    EmployeeProxy getSupervisor();
    void setSupervisor(EmployeeProxy supervisor);
}

The @ProxyFor annotation specifies the entity class to proxy. EntityProxy objects are used on the client side, just as entities are used on the server.

Request

Request defines an interface to the data access methods. It is the interface that deals with the server, and it can be seen as the service interface.

@Service(Employee.class)
public interface EmployeeRequest extends RequestContext {

    Request<Long> countEmployees();

    InstanceRequest<EmployeeProxy, Void> persist();
}

This interface defines the methods that can be served; they must have the same signature (except for the return type) as the concrete service on the server. The return type must be a Request object parameterized with the return type of the server-side service. The Request class is used for static methods; for instance methods, such as persist, InstanceRequest is used, specifying the instance type and the method return type.

The @Service annotation gives the service class. In our case it is the same as the entity, but it can be different; merging the DAO and the entity is merely a common practice with JPA.

Request Factory

RequestFactory defines a central point for creating each RequestContext implementation. It is the interface that is directly instantiated (via deferred binding) by the developer.

public interface EmployeeRequestFactory extends RequestFactory {

    EmployeeRequest employeeRequest();
}

Putting it all together

To use all of this, first we have to declare the RequestFactory servlet:

<servlet>
    <servlet-name>gwtRequest</servlet-name>
    <servlet-class>com.google.gwt.requestfactory.server.RequestFactoryServlet</servlet-class>
</servlet>

<servlet-mapping>
    <servlet-name>gwtRequest</servlet-name>
    <url-pattern>/gwtRequest</url-pattern>
</servlet-mapping>

Then we have to create the RequestFactory and give it an EventBus:
EmployeeRequestFactory factory = GWT.create(EmployeeRequestFactory.class);
factory.initialize(new SimpleEventBus());

Now we can use it. To call a server-side static method, we get the RequestContext and call the method on it. To actually execute it, Request defines a fire method that runs the method on the server and takes a Receiver as parameter. Receiver provides callback methods to execute some code. The defined callbacks are:

  • onFailure: in case an error occurred on the server
  • onSuccess: to get the result
  • onViolation: when validation failed. Validation is supported by RequestFactory; validation rules should be set on the entity.

factory.employeeRequest().countEmployees().fire(new Receiver<Long>() {
    @Override
    public void onSuccess(Long response) {
        // implementation
    }
});

Creating a proxy should not be done using deferred binding (GWT.create), but via the request context:

EmployeeProxy employee = factory.employeeRequest().create(EmployeeProxy.class);

Once a proxy is created, we can persist it on the server:

factory.employeeRequest().persist().using(employee).fire();

A call to fire() is equivalent to:

fire(new Receiver<Void>() {
    @Override
    public void onSuccess(Void response) {
        // implementation
    }
});

RequestFactory supports relationships: it sends a whole object graph for persistence, but it does not support embedded objects.