This module is mostly composed of small utilities that live on their own. To use this module in your Maven project, add this snippet to your POM:
<dependency>
  <groupId>it.tidalwave.thesefoolishthings</groupId>
  <artifactId>it-tidalwave-util</artifactId>
  <version>3.2-ALPHA-23</version>
</dependency>
Snippets for other build tools (such as Gradle) are available here. The dependencies of this module are described here. Information about quality and continuous integration is available at the main project page.
Pair and Triple are immutable heterogeneous tuples (for n = 2, 3) that can simply be used to hold values together:

final var p = Pair.of("foo bar", 7);
final var fooBar = p.a;
final var seven = p.b;

final var t = Triple.of("foo bar", 7, false);
final var fooBar = t.a;
final var seven = t.b;
final var bool = t.c;
Both Pair and Triple offer methods to generate special Streams. For instance, this code:
final var stream1 = Pair.pairRangeClosed("foo bar", 1, 3);
generates the pairs ["foo bar", 1], ["foo bar", 2], ["foo bar", 3]. The following code:
final var stream2 = Pair.indexedPairStream(List.of("foo", "bar"));
generates the pairs [0, "foo"], [1, "bar"]. Variants make it possible to start from custom Streams, Collections and Iterables, or to pick a different value for the starting index.
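The indexed-pair idea can be reproduced with plain JDK streams. The following sketch uses a local Pair record as a stand-in for the library class (names here are illustrative, not the library's API) and shows how a custom starting index fits in:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class IndexedPairs {
    // Local stand-in for the library's Pair, just for illustration.
    record Pair<A, B>(A a, B b) {}

    // Pairs each element of the list with its index, starting at 'start'.
    static <T> List<Pair<Integer, T>> indexedPairs (List<T> list, int start) {
        return IntStream.range(0, list.size())
                        .mapToObj(i -> new Pair<>(start + i, list.get(i)))
                        .collect(Collectors.toList());
    }

    public static void main (String[] args) {
        System.out.println(indexedPairs(List.of("foo", "bar"), 0));
        // [Pair[a=0, b=foo], Pair[a=1, b=bar]]
    }
}
```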
Pair can be used to easily implement a two-level nested loop with Streams. For instance, the following code:
final var actual = IntStream.rangeClosed(1, limit)
        .boxed()
        .flatMap(a -> Pair.pairRangeClosed(a, a + 1, limit))
        .collect(toList());
is equivalent to this two-level nested loop:
final List<Pair<Integer, Integer>> expected = new ArrayList<>();

for (var a = 1; a <= limit; a++) {
    for (var b = a + 1; b <= limit; b++) {
        expected.add(Pair.of(a, b));
    }
}
In a similar way, this code with Triple:
final var actual = IntStream.rangeClosed(1, limit)
        .boxed()
        .flatMap(a -> Pair.pairRangeClosed(a, a + 1, limit))
        .flatMap(p -> Triple.tripleRangeClosed(p, p.b + 1, limit))
        .collect(toList());
is equivalent to this three-level nested loop:
final List<Triple<Integer, Integer, Integer>> expected = new ArrayList<>();

for (var a = 1; a <= limit; a++) {
    for (var b = a + 1; b <= limit; b++) {
        for (var c = b + 1; c <= limit; c++) {
            expected.add(Triple.of(a, b, c));
        }
    }
}
It is also possible to “zip” two streams into a Stream<Pair>:
// given
final var intStream = IntStream.range(0, 5).boxed();
final var stringStream = IntStream.range(0, 5).mapToObj(n -> "string-" + (char)('a' + n));
// when
final var underTest = Pair.zip(intStream, stringStream);
// then
assertThat(underTest.collect(toList()), is(asList(
        Pair.of(0, "string-a"),
        Pair.of(1, "string-b"),
        Pair.of(2, "string-c"),
        Pair.of(3, "string-d"),
        Pair.of(4, "string-e"))));
A Finder is a factory for creating a query that extracts results from a data source: for instance, a query on a registry of persons to get some records according to certain criteria. The data source can be in-memory or a more sophisticated entity such as a database. Finder has been designed with a few main purposes in mind.
Finder's methods can be either intermediate or termination. Intermediate methods return a Finder (even though not the same instance, since a Finder must be immutable); they are used to set a number of parameters of the query before it is executed. For instance, the intermediate methods shown below can be used to specify which section of the results we are interested in (pagination):
@Nonnull public Finder<T> from (@Nonnegative int firstResult);
@Nonnull public Finder<T> max (@Nonnegative int maxResults);
The termination methods shown below, instead, perform the query and retrieve objects, or provide a count of them:
@Nonnull public default Optional<T> optionalResult()
@Nonnull public default Optional<T> optionalFirstResult()
@Nonnull public List<T> results();
@Nonnegative public int count();
Note: at the present time there are some deprecated methods that were designed before Java 8's Optional was available; their signatures declare a NotFoundException, which is a checked exception. They should not be used for new development, as they will be removed in a future release.
For the following examples of Finder usage we will refer to a registry of Persons that exposes a method to query the contained records:
public interface PersonRegistry {
    @Nonnull public Finder<Person> findPerson();
    public void add (@Nonnull Person person);
}
Data can be queried as:
log.info("All: {}", registry.findPerson().results());
log.info("Two persons from the 3rd position: {}",
         registry.findPerson().from(3).max(2).results());
They can be sorted in some basic way:
log.info("All, sorted by first name: {}",
         registry.findPerson().sort(BY_FIRST_NAME).results());
log.info("All, sorted by last name, descending: {}",
         registry.findPerson().sort(BY_LAST_NAME, DESCENDING).results());
Intermediate methods can be freely mixed. This first example shows the value of Finder in offering a clean API that doesn't get inflated with lots of methods just to provide variants of the query (the typical advantage of a fluent interface). We will also show that this API can be extended with new methods without changing the general concepts.
In-memory Finders

Finders can operate both in memory and with more complex data sources. Their core scenario is the latter, otherwise they could be replaced by Java 8 Streams (a more detailed comparison with Streams can be found at the end of this chapter); but to start with simpler code, let's first have a look at the in-memory approach.
In-memory Finders can be useful in some real-world cases, for instance when a controller or a DAO has cached data, or to create mocks for testing classes that use more complex Finders.
In the simplest case you already have the results in a Collection and just want to make them available through a Finder; in this case the following method is what you need:
@Nonnull public static <U> Finder<U> ofCloned (@Nonnull final Collection<? extends U> items)
It is used by a first example implementation of PersonRegistry:
public class InMemoryPersonRegistry implements PersonRegistry {
    private final List<Person> persons = new ArrayList<>();

    @Override
    public void add (@Nonnull final Person person) {
        persons.add(person);
    }

    @Override @Nonnull
    public Finder<Person> findPerson() {
        return Finder.ofCloned(persons);
    }
}
As the name of the method says, the collection is cloned (shallow clone) at construction time, so any change made after the Finder's creation won't be seen.
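The snapshot semantics can be sketched with the JDK alone. The ofCloned name below is a stand-in mimicking the library's behaviour, not the library class itself:

```java
import java.util.ArrayList;
import java.util.List;

public class ClonedSnapshot {
    // Minimal stand-in for Finder.ofCloned(): copies the collection once.
    static <T> List<T> ofCloned (List<? extends T> items) {
        return List.copyOf(items);   // shallow, immutable snapshot
    }

    public static void main (String[] args) {
        final var persons = new ArrayList<>(List.of("Bob"));
        final var snapshot = ofCloned(persons);
        persons.add("Alice");            // change made after "finder" creation
        System.out.println(snapshot);    // [Bob]  (the snapshot is unaffected)
        System.out.println(persons);     // [Bob, Alice]
    }
}
```

Note that the clone is shallow: the elements themselves are shared, only the collection is copied.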
If data are not immediately available and you want to compute them only on demand, passing a Supplier is more appropriate:
@Nonnull public static <U> Finder<U> ofSupplier (@Nonnull final Supplier<? extends Collection<? extends U>> supplier)
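The point of the supplier variant is laziness: nothing is computed until a termination method runs. This can be sketched with JDK types only (SupplierFinder below is a hypothetical stand-in, not the library class):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class LazyFinderSketch {
    static final AtomicInteger CALLS = new AtomicInteger();

    // Stands for an expensive computation (e.g. parsing a file).
    static List<String> expensiveLoad() {
        CALLS.incrementAndGet();
        return List.of("foo", "bar");
    }

    // Minimal stand-in for a supplier-based Finder: nothing is computed
    // until results() is invoked.
    record SupplierFinder<T>(Supplier<? extends List<T>> supplier) {
        List<T> results() { return List.copyOf(supplier.get()); }
    }

    public static void main (String[] args) {
        final var finder = new SupplierFinder<String>(LazyFinderSketch::expensiveLoad);
        System.out.println("calls after creation: " + CALLS.get());   // 0
        System.out.println(finder.results());                         // [foo, bar]
        System.out.println("calls after results(): " + CALLS.get());  // 1
    }
}
```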
Supplier-based Finders

While the previous example referred to an in-memory implementation for the sake of simplicity, the Supplier might retrieve the data from any kind of external source: perhaps parsing an XML file, querying a database (a more complex example will be provided in the chapters below) or calling a REST endpoint. If, by specifying from() and/or max(), only a subset of the data is required, computational power might be wasted whenever there is a cost associated with the retrieval: the Supplier is supposed to provide the whole set of data.
In this case an alternate approach is offered: a BiFunction that receives the from and max parameters:
@Nonnull public static <U> Finder<U> ofProvider ( @Nonnull final BiFunction<Integer, Integer, ? extends Collection<? extends U>> provider)
An example of implementation is given by this test:
// given
final BiFunction<Integer, Integer, List<String>> provider =
        // This stands for a complex computation to make data available
        (from, max) -> IntStream.range(from, Math.min(from + max, 10))
                                .mapToObj(Integer::toString)
                                .collect(toList());
final var underTest = Finder.ofProvider(provider);
// when
final var actualResult1 = underTest.results();
final var actualResult2 = underTest.from(4).max(3).results();
// then
final var expectedResult1 = List.of("0", "1", "2", "3", "4", "5", "6", "7", "8", "9");
final var expectedResult2 = List.of("4", "5", "6");
assertThat(actualResult1, is(expectedResult1));
assertThat(actualResult2, is(expectedResult2));
In most cases this is what you need, without having to write a class implementing Finder.
Sometimes you already have a working Finder, but you want to provide transformed (perhaps decorated) data. In this case there is a method for the job, which accepts a mapping Function:
@Nonnull public static <U, V> Finder<U> mapping (@Nonnull final Finder<V> delegate, @Nonnull final Function<? super V, ? extends U> mapper)
In this example the mapping Finder relies on a Finder<Integer>, while the mapper multiplies the original values by two and converts them to strings:
// given
final var list = List.of(9, 5, 7, 6, 3);
final var delegate = Finder.ofCloned(list);
final Function<Integer, String> multiplyAndStringify = n -> Integer.toString(n * 2);
final var underTest = Finder.mapping(delegate, multiplyAndStringify);
// when
final var actualResult1 = underTest.results();
final var actualResult2 = underTest.from(2).max(2).results();
// then
final var expectedResult1 = List.of("18", "10", "14", "12", "6");
final var expectedResult2 = List.of("14", "12");
assertThat(actualResult1, is(expectedResult1));
assertThat(actualResult2, is(expectedResult2));
Data-source Finders

Now let's see how a Finder can work with a complex data source that is not in memory. A classic example is the relational database, so we will use JPA (Java Persistence API) as a reference. Of course, similar examples could be made with other APIs for relational databases, as well as with other kinds of data sources such as NoSQL databases, semantic databases, etc.
The central class of JPA is EntityManager: it's the facility that makes it possible to create and execute queries. What we want is to make the Finder execute code such as:
return em.createQuery(jpaql, resultType).setFirstResult(firstResult).setMaxResults(maxResults);
where jpaql, firstResult and maxResults have been properly set by previously called intermediate methods. Basically, JPAFinder needs to create a proper JPAQL query string as a function of its parameters, as illustrated by the following tests:
@Test
public void testSimpleQuery() {
    // when
    final var results = underTest.results();
    // then
    assertThat(jpaMock.sqlQuery, is("SELECT p FROM PersonEntity p"));
    assertThat(jpaMock.firstResult, is(0));
    assertThat(jpaMock.maxResults, is(Integer.MAX_VALUE));
}

@Test
public void testQueryWithAscendingSortAndFirstMax() {
    // when
    final var results = underTest.sort(BY_FIRST_NAME).from(2).max(4).results();
    // then
    assertThat(jpaMock.sqlQuery, is("SELECT p FROM PersonEntity p ORDER BY p.firstName"));
    assertThat(jpaMock.firstResult, is(2));
    assertThat(jpaMock.maxResults, is(4));
}

@Test
public void testQueryWithDescendingSortAndFirstMax() {
    // when
    final var results = underTest.sort(BY_LAST_NAME, DESCENDING).from(3).max(7).results();
    // then
    assertThat(jpaMock.sqlQuery, is("SELECT p FROM PersonEntity p ORDER BY p.lastName DESC"));
    assertThat(jpaMock.firstResult, is(3));
    assertThat(jpaMock.maxResults, is(7));
}

@Test
public void testQueryWithDoubleSort() {
    // when
    final var results = underTest.sort(BY_LAST_NAME, DESCENDING).sort(BY_FIRST_NAME, ASCENDING).results();
    // then
    assertThat(jpaMock.sqlQuery, is("SELECT p FROM PersonEntity p ORDER BY p.lastName DESC, p.firstName"));
    assertThat(jpaMock.firstResult, is(0));
    assertThat(jpaMock.maxResults, is(Integer.MAX_VALUE));
}

@Test
public void testQueryWithCount() {
    // when
    final var count = underTest.count();
    // then
    assertThat(jpaMock.sqlQuery, is("SELECT COUNT(p) FROM PersonEntity p"));
    assertThat(jpaMock.firstResult, is(0));
    assertThat(jpaMock.maxResults, is(Integer.MAX_VALUE));
}
Before going on, consider that JPA manages transactions in a few ways that, while not particularly complex in the context of a real application, would require excessive setup for a simple example like the one we're dealing with. So we introduce a simple helper that executes a task in the context of a transaction:
public <T> T computeInTx (@Nonnull Function<? super EntityManager, T> task);
public default void runInTx (@Nonnull final Consumer<? super EntityManager> task)
In a real case the EntityManager would rather be injected.
The first thing we need is to define the state of the Finder, which must both model the parameters set by intermediate methods and contain a reference to the data source (in our case, the TxManager).
@Nonnull private final Class<E> entityClass;
@Nonnull private final Function<E, T> fromEntity;
@Nonnull private final TxManager txManager;
@Nonnegative private final int firstResult;
@Nonnegative private final int maxResults;
@Nonnull private final List<Pair<JpaqlSortCriterion, SortDirection>> sortCriteria;
Let's now focus on the implementation of the intermediate methods. They usually don't do anything smart: they just accumulate the parameters required to perform the query later. Since a Finder must be immutable, they can't change the internal state; they must instead create and return a cloned Finder with the original state and only a single field changed. This is a typical approach for immutable objects.
@Override @Nonnull
public Finder<T> from (@Nonnegative final int firstResult) {
    return new JpaFinder<>(entityClass, fromEntity, txManager, firstResult, maxResults, sortCriteria);
}

@Override @Nonnull
public Finder<T> max (@Nonnegative final int maxResults) {
    return new JpaFinder<>(entityClass, fromEntity, txManager, firstResult, maxResults, sortCriteria);
}
Now let's deal with sorting. Sorting works in a different way depending on whether the Finder is “in memory” or associated with a data source:

in memory: from() / max() » results() » sorting
with a data source: sort() / from() / max() » results()
In both cases sorting criteria are defined by means of the interfaces SortCriterion and InMemorySortCriterion, the latter extending the former.
InMemorySortCriterion declares a method which will be called by the Finder to perform the sort:
public void sort (@Nonnull List<? extends U> results, @Nonnull SortDirection sortDirection);
A convenience method of() makes it possible to easily create a working SortCriterion by wrapping a Comparator:
public static final SortCriterion BY_FIRST_NAME = InMemorySortCriterion.of(comparing(Person::getFirstName));
public static final SortCriterion BY_LAST_NAME = InMemorySortCriterion.of(comparing(Person::getLastName));
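The in-memory mechanics can be sketched with the JDK alone: a criterion wraps a Comparator and, when asked, sorts the list in the requested direction. The names below (CriterionOf, Direction) are illustrative stand-ins, not the library's types:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortCriterionSketch {
    enum Direction { ASCENDING, DESCENDING }

    // Minimal stand-in for InMemorySortCriterion.of(): wraps a Comparator
    // and applies it (possibly reversed) to the result list.
    record CriterionOf<T>(Comparator<T> comparator) {
        void sort (List<T> results, Direction direction) {
            results.sort(direction == Direction.DESCENDING ? comparator.reversed() : comparator);
        }
    }

    public static void main (String[] args) {
        final var byName = new CriterionOf<String>(Comparator.naturalOrder());
        final var names = new ArrayList<>(List.of("Carol", "Alice", "Bob"));
        byName.sort(names, Direction.DESCENDING);
        System.out.println(names);   // [Carol, Bob, Alice]
    }
}
```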
The intermediate method Finder.sort() behaves like the other intermediate methods and just collects data for later use:
@Override @Nonnull
public Finder<T> sort (@Nonnull final SortCriterion criterion, @Nonnull final SortDirection direction) {
    if (!(criterion instanceof JpaqlSortCriterion)) {
        throw new IllegalArgumentException("Can't sort by " + criterion);
    }

    return new JpaFinder<>(entityClass, fromEntity, txManager, firstResult, maxResults,
                           concat(sortCriteria, Pair.of((JpaqlSortCriterion)criterion, direction)));
}
Note that it usually rejects implementations of SortCriterion that it doesn't know.
While the implementation of SortCriterion could be a simple enum that is later evaluated in a switch, in a good design it provides its own behaviour (which is disclosed only to the Finder implementation). In the case of JPA, that behaviour is to assemble the ORDER BY section of the query:
@RequiredArgsConstructor
static final class JpaqlSortCriterion implements SortCriterion {
    @Nonnull
    private final String field;

    @Nonnull
    public String processSql (@Nonnull final String jpaql, @Nonnull final SortDirection sortDirection) {
        final var orderBy = jpaql.contains("ORDER BY") ? ", " : " ORDER BY ";
        return jpaql + orderBy + field + ((sortDirection == SortDirection.DESCENDING) ? " DESC" : "");
    }
}

public static final SortCriterion BY_FIRST_NAME = new JpaqlSortCriterion("p.firstName");
public static final SortCriterion BY_LAST_NAME = new JpaqlSortCriterion("p.lastName");
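Stripped of Lombok and the library interfaces, the query assembly can be reproduced and verified with a few lines of plain Java; the resulting string matches the double-sort test shown earlier:

```java
public class JpaqlAssembly {
    // Plain-Java rewrite of JpaqlSortCriterion.processSql(), for illustration:
    // appends " ORDER BY " the first time, and ", " for each further criterion.
    static String processSql (String jpaql, String field, boolean descending) {
        final var orderBy = jpaql.contains("ORDER BY") ? ", " : " ORDER BY ";
        return jpaql + orderBy + field + (descending ? " DESC" : "");
    }

    public static void main (String[] args) {
        var q = "SELECT p FROM PersonEntity p";
        q = processSql(q, "p.lastName", true);     // first criterion appends ORDER BY
        q = processSql(q, "p.firstName", false);   // second one appends with a comma
        System.out.println(q);
        // SELECT p FROM PersonEntity p ORDER BY p.lastName DESC, p.firstName
    }
}
```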
The core part of the Finder is where it finalises and executes the query: it creates the JPAQL query and then calls EntityManager to execute it.
@Nonnull
private <R> TypedQuery<R> createQuery (@Nonnull final EntityManager em,
                                       @Nonnull final Class<R> resultType,
                                       @Nonnull final String jpaqlPrefix) {
    final var buffer = new AtomicReference<>(jpaqlPrefix + " FROM " + entityClass.getSimpleName() + " p");
    sortCriteria.forEach(p -> buffer.updateAndGet(prev -> p.a.processSql(prev, p.b)));
    final var jpaql = buffer.get();
    log.info(">>>> {}", jpaql);
    return em.createQuery(jpaql, resultType).setFirstResult(firstResult).setMaxResults(maxResults);
}
At last we can implement the termination methods: they run the query, extract the part of the results they need and convert them from JPA entities to the desired class (this task may or may not be needed, depending on the architecture of the application: a Finder might expose JPA entities if desired).
@Override @Nonnull
public Optional<T> optionalResult() {
    final var results = results();

    if (results.size() > 1) {
        throw new RuntimeException("More than a single result");
    }

    return results.stream().findFirst();
}

@Override @Nonnull
public Optional<T> optionalFirstResult() {
    // Warning: the stream must be consumed *within* computeInTx()
    return txManager.computeInTx(em -> createQuery(em, entityClass, "SELECT p")
            .getResultStream()
            .findFirst()
            .map(fromEntity));
}

@Override @Nonnull
public List<T> results() {
    // Warning: the stream must be consumed *within* computeInTx()
    return txManager.computeInTx(em -> createQuery(em, entityClass, "SELECT p")
            .getResultStream()
            .map(fromEntity)
            .collect(Collectors.toList()));
}

@Override @Nonnegative
public int count() {
    return txManager.computeInTx(em -> createQuery(em, Long.class, "SELECT COUNT(p)").getSingleResult()).intValue();
}
A point worth mentioning is how transactions are handled: it largely depends on the technology used, as one needs to respect the best or mandatory practices that come with it. In the case of JPA, the Stream of results produced by a query must be consumed before the transaction is committed; in our case this means within the call to TxManager.
Extended Finders

An extended Finder is a subclass of Finder that exposes additional methods for filtering the results. For instance, we could write a PersonFinder for the previous PersonRegistry that extends Finder<Person> and offers two new methods that filter by first or last name with a regular expression:
@Nonnull public PersonFinder withFirstName (@Nonnull String regex);
@Nonnull public PersonFinder withLastName (@Nonnull String regex);
The registry would now return a PersonFinder instead of the general Finder<Person>, like this:
public interface PersonRegistry2 extends PersonRegistry {
    @Override @Nonnull
    public PersonFinder findPerson();
}
There is a first problem to address: making it possible to freely mix all the intermediate methods, both the new ones and those defined in the base Finder. This cannot be achieved by merely extending the Finder interface (i.e. interface PersonFinder extends Finder<Person>), because the methods declared in Finder return a value statically typed as Finder, so the compiler would not allow calling the new methods on it. In other words, this would be possible:
List<Person> persons = findPerson().withLastName("B.*").max(5).results();
but this wouldn't compile:
List<Person> persons = findPerson().max(5).withLastName("B.*").results();
Free mixing of methods is mandatory to fulfil the flexibility target, which allows one portion of the application to refine a query that has been partially constructed in another part.
To address this problem a specific interface named ExtendedFinderSupport is provided. It just re-declares the methods provided by Finder, overriding their return type (in our example, to PersonFinder in place of Finder<Person>). This is possible because Java supports covariant return types.
ExtendedFinderSupport takes two type parameters: the type of the managed object (Person) and the type of the new Finder (PersonFinder). To better understand this, have a look at the ExtendedFinderSupport source:
public interface ExtendedFinderSupport<T, F extends Finder<T>> extends Finder<T> {
    /** {@inheritDoc} */
    @Override @Nonnull
    public F from (@Nonnegative int firstResult);

    /** {@inheritDoc} */
    @Override @Nonnull
    public F max (@Nonnegative int maxResults);

    /** {@inheritDoc} */
    @Override @Nonnull
    public F sort (@Nonnull SortCriterion criterion);

    /** {@inheritDoc} */
    @Override @Nonnull
    public F sort (@Nonnull SortCriterion criterion, @Nonnull SortDirection direction);

    /** {@inheritDoc} */
    @Override @Nonnull
    public F withContext (@Nonnull Object context);
}
So a properly designed PersonFinder must extend ExtendedFinderSupport<Person, PersonFinder> in place of Finder<Person>:
public interface PersonFinder extends ExtendedFinderSupport<Person, PersonFinder> {
    @Nonnull public PersonFinder withFirstName (@Nonnull String regex);
    @Nonnull public PersonFinder withLastName (@Nonnull String regex);
}
In this way the new methods can be freely mixed with the ones inherited from the super interface:
log.info("Whose first name starts with B: {}",
         registry.findPerson().withFirstName("B.*").results());
log.info("Whose first name starts with B, sorted by first name: {}",
         registry.findPerson().sort(BY_FIRST_NAME).withFirstName("B.*").results());
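The self-referential generic trick can be distilled into a few JDK-only declarations (the names below are illustrative, not the library's). The key point is that every inherited method returns F, so the subtype's own methods stay reachable anywhere in the chain:

```java
public class CovariantFluent {
    // The base fluent interface: F is "the concrete finder type itself",
    // so inherited methods return the subtype, not the base type.
    interface Finder<T, F extends Finder<T, F>> {
        F max (int maxResults);
    }

    // The specialised finder adds withName() and can mix it freely with max().
    record NameFinder(String name, int maxResults) implements Finder<String, NameFinder> {
        public NameFinder max (int maxResults)  { return new NameFinder(name, maxResults); }
        NameFinder withName (String regex)      { return new NameFinder(regex, maxResults); }
        String describe()                       { return name + "/" + maxResults; }
    }

    public static void main (String[] args) {
        // Thanks to the covariant return type, withName() compiles after max().
        final var result = new NameFinder(".*", 10).max(5).withName("B.*").describe();
        System.out.println(result);   // B.*/5
    }
}
```

Without the F parameter, max() would return the base type and the call to withName() after it would not compile, which is exactly the problem described above.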
Hierarchic Finders

In a complex application it might be convenient to write a number of different Finders in the form of a hierarchy, for instance because there is some common behaviour that can be effectively captured by means of the generalisation-specialisation relationship (even though composition is often a better approach). The Finder API doesn't mandate anything beyond respecting the contract declared in its interface and providing an immutable implementation, so one can proceed with one's favourite design strategy. Nevertheless, the API provides a support class, HierarchicFinderSupport, which offers the capability of a completely encapsulated state: all fields are private (rather than protected) and each level of the hierarchy knows nothing of the internal state of the others. This is a way to mitigate the tight coupling caused by inheritance, so one can change the internal state of a Finder at an intermediate level of the hierarchy without forcing the subclasses to be adjusted.
To explain how this works by example, we are going to show a possible implementation of the extended Finder introduced in the previous section (in-memory, to keep things simpler).
First we have to declare fields for the internal state and a public constructor to initialize the object with reasonable defaults:
@Nonnull private final List<Person> persons;
@Nonnull private final Pattern firstNamePattern;
@Nonnull private final Pattern lastNamePattern;

// This is for public use
public PersonFinderImpl2a (@Nonnull final List<Person> persons) {
    this(persons, Pattern.compile(".*"), Pattern.compile(".*"));
}
A private constructor to initialize everything to arbitrary values is also needed:
// This could be generated by Lombok @RequiredArgsConstructor
private PersonFinderImpl2a (@Nonnull final List<Person> persons,
                            @Nonnull final Pattern firstNamePattern,
                            @Nonnull final Pattern lastNamePattern) {
    this.persons = persons;
    this.firstNamePattern = firstNamePattern;
    this.lastNamePattern = lastNamePattern;
}
As explained above, intermediate methods must create copies of the Finder to comply with the immutability constraint. In a normal class this would be performed by a copy constructor that takes all the fields, including those of the superclass(es); but since we decided to make them private, they can't be accessed. So all we can do is call the constructor shown in the snippet above, which only deals with the fields of the current class. Since it calls the default super constructor, the state of the superclass(es) would be reset to defaults: i.e. any change applied by intermediate methods implemented in the superclass(es) would be lost. Obviously this is not how things are supposed to work: that's why HierarchicFinderSupport offers a clonedWith() method that fixes everything.
@Override @Nonnull
public PersonFinder withFirstName (@Nonnull final String regex) {
    return clonedWith(new PersonFinderImpl2a(persons, Pattern.compile(regex), lastNamePattern));
}

@Override @Nonnull
public PersonFinder withLastName (@Nonnull final String regex) {
    return clonedWith(new PersonFinderImpl2a(persons, firstNamePattern, Pattern.compile(regex)));
}
How does it work? It relies on the presence of a special copy constructor that looks like this:
public PersonFinderImpl2a (@Nonnull final PersonFinderImpl2a other, @Nonnull final Object override) {
    super(other, override);
    final var source = getSource(PersonFinderImpl2a.class, other, override);
    this.persons = source.persons;
    this.firstNamePattern = source.firstNamePattern;
    this.lastNamePattern = source.lastNamePattern;
}
Note: having this special copy constructor is a requirement of any subclass of HierarchicFinderSupport. The HierarchicFinderSupport constructor performs a runtime check by introspection and throws an exception if the proper copy constructor is not found.
It takes two parameters:

other is the usual parameter of a clone constructor and references the instance being cloned.

override is the incomplete finder that we instantiated in our custom intermediate methods; it holds the variations to apply to the state of the new Finder.

We need to initialize all the fields of our pertinence (that is, the ones declared in the current class), choosing where to take their values from. Aren't they in the override object? No, not always. If we are in a hierarchy of Finders, all copy constructors are called whenever a change is made; in other words, we can't be sure that our portion of the state is the one that needs to be partially changed. We can tell by looking at the dynamic type of the override object: if it is our same type, it is the incomplete Finder carrying the new values, and we must initialize from it; otherwise we must initialize as in a regular clone constructor, from the other object. A convenience method getSource() makes this decision for us. Of course we need to call the super() constructor to make sure everything is fine (but no details of the superclass are exposed by it).
Is it a bit clumsy? Admittedly it is, even though the code is simple and clean: once the concept is clear, it's easy to write a copy constructor for a new extended Finder. Part of the clumsiness derives from the complexity of inheritance, which we are trying to work around. If you don't like this approach, just forget HierarchicFinderSupport.
If you really don't like the concept of an “incomplete” Finder (which is a curious thing indeed: a short-lived object “degraded” to a value object), you can use a simpler value object that just holds the required values. Since override is a generic Object, it will work. This approach requires some more code to write, but the @Data annotation from Lombok or Java 16 records might be useful here.
For instance, an alternate implementation can encapsulate its parameters in a special inner class:
static class Status {
    @Nonnull final List<Person> persons;
    @Nonnull final Pattern firstNamePattern;
    @Nonnull final Pattern lastNamePattern;

    Status (@Nonnull final List<Person> persons,
            @Nonnull final Pattern firstNamePattern,
            @Nonnull final Pattern lastNamePattern) {
        this.persons = persons;
        this.firstNamePattern = firstNamePattern;
        this.lastNamePattern = lastNamePattern;
    }
}

@Nonnull private final Status status;

// This is for public use
public PersonFinderImpl2b (@Nonnull final List<Person> persons) {
    this(new Status(persons, Pattern.compile(".*"), Pattern.compile(".*")));
}
so the private constructor becomes:
private PersonFinderImpl2b (@Nonnull final Status status) { this.status = status; }
The new copy constructor now is:
public PersonFinderImpl2b (@Nonnull final PersonFinderImpl2b other, @Nonnull final Object override) {
    super(other, override);
    final var source = getSource(Status.class, other.status, override);
    this.status = new Status(source.persons, source.firstNamePattern, source.lastNamePattern);
}
And the methods to specify parameters are:
@Override @Nonnull
public PersonFinder withFirstName (@Nonnull final String regex) {
    return clonedWith(new Status(status.persons, Pattern.compile(regex), status.lastNamePattern));
}

@Override @Nonnull
public PersonFinder withLastName (@Nonnull final String regex) {
    return clonedWith(new Status(status.persons, status.firstNamePattern, Pattern.compile(regex)));
}
Termination methods with HierarchicFinderSupport

Note: this part of the API might go away in the future: after TFT-262, a Finder implementation only requires results().
If you decide to implement a Finder by subclassing HierarchicFinderSupport, there is an alternative way to implement the termination methods, since they have default implementations: you can instead override either of these two methods:
@Nonnull protected List<T> computeNeededResults()
This method is responsible for producing the final results as they will be returned to the caller. That is, it must respect the parameters concerning pagination (from() or max()), sorting and such. For instance, if the source is a relational database, this method should prepare and execute a SQL query with all the relevant clauses (WHERE, ORDER BY, LIMIT, etc.). If this method is not overridden, it will call the method shown below and then apply pagination and sorting by itself (in memory).
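The in-memory pagination that the default implementation applies can be imagined as plain stream slicing. This is a sketch of the idea, not the library's code:

```java
import java.util.List;
import java.util.stream.Collectors;

public class InMemoryPagination {
    // Sketch of what a default computeNeededResults() might do with the
    // pagination parameters: take everything, then slice with skip/limit.
    static <T> List<T> paginate (List<T> all, int firstResult, int maxResults) {
        return all.stream()
                  .skip(firstResult)
                  .limit(maxResults)
                  .collect(Collectors.toList());
    }

    public static void main (String[] args) {
        final var all = List.of("0", "1", "2", "3", "4", "5");
        System.out.println(paginate(all, 3, 2));   // [3, 4]
    }
}
```

This is exactly why the in-memory fallback is less efficient: all the objects must be materialised before the unwanted ones are discarded.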
@Nonnull protected List<T> computeResults()
This method returns all the objects of pertinence, without filtering or sorting them; the default implementation of computeNeededResults() will take care of that. Since this implies working in memory after having loaded/created all the objects, this approach is easier to write but less efficient. It's fine for mocks or simple cases. The implementation in our example is:
@Override @Nonnull
protected List<Person> computeResults() {
    return persons.stream()
                  .filter(p -> firstNamePattern.matcher(p.getFirstName()).matches()
                            && lastNamePattern.matcher(p.getLastName()).matches())
                  .collect(Collectors.toList());
}
Finder vs Stream

A first look at Finder, in particular the presence of intermediate and termination methods, certainly recalls a similarity with Java 8's Stream. Finder was designed before Java 8 existed, and at that time it partly covered functions that were later made available with Stream; but it was conceived with a different scope:
Stream is a library facility that focuses on a functional and efficient way to navigate through an abstract sequence of objects; it can be customised via Spliterator to integrate with unusual data sources, but it can't interact with them. In other words, a Spliterator can't receive from the Stream information about filtering or sorting: first data are extracted from the data source, then they are manipulated in memory. Last but not least, the API exposes a predefined set of methods that can't be extended.

Finder, instead, is a business facility that can interact with the data source and is well aware of the business model; so it can be extended with new methods related to the specific structure of the model classes (in the previous example, by knowing that a Person has firstName and lastName). Furthermore, it has been designed to integrate with another member of this library, named As, which makes it possible to use a particular implementation of the DCI architectural pattern.
A Stream can filter results by means of function composition, for instance filter(p -> Pattern.matches("B.*", p.getFirstName())); but in this case filtering happens only after the objects have been loaded in memory, because the data source has no way to know what is happening and cannot optimise its behaviour. For instance, if the data source is a DAO to a database, it can't create an ad-hoc SQL statement; Finder, instead, can cooperate with the data source and prepare an optimised query.
Finders can be effectively used in synergy with Stream by chaining the appropriate methods: this allows choosing which part of the processing must be performed by the data source and which part in memory, after data have been retrieved.
// Here both filtering and sorting are performed by the Finder, which could make them happen in the data source.
log.info("Whose first name starts with B, sorted by first name: {}",
         registry.findPerson()
                 .withFirstName("B.*")
                 .sort(BY_FIRST_NAME)
                 .results());
// Here filtering is performed as above, but sorting is done in memory after all data have been retrieved.
log.info("Whose first name starts with B, sorted by first name: {}",
         registry.findPerson()
                 .withFirstName("B.*")
                 .stream()
                 .sorted(Comparator.comparing(Person::getFirstName))
                 .collect(Collectors.toList()));
// Here both filtering and sorting are performed in memory.
log.info("Whose first name starts with B, sorted by first name: {}",
         registry.findPerson()
                 .stream()
                 .filter(p -> Pattern.matches("B.*", p.getFirstName()))
                 .sorted(Comparator.comparing(Person::getFirstName))
                 .collect(Collectors.toList()));
This explains why Finder doesn't offer methods such as filter(Predicate<T>): there is no way it could understand, from a compiled Java function, how to prepare a query for a generic data source. Such a method would only be useful to post-process data once they have been loaded in memory, but it's more effective to pass the results to a Stream and use the standard Java API.
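This cooperation can be sketched with a minimal, hypothetical Finder-like interface (all names here are invented for illustration, not the library API): the filtering criterion is kept as plain data (a regex string) rather than an opaque Predicate, so a data-source-backed implementation could translate it, for instance, into a SQL clause. The in-memory implementation below simply applies it with a Stream:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Hypothetical sketch: the criterion is data the implementation can inspect,
// unlike a compiled Predicate, so a JPA-backed variant could build a query from it.
class FinderSketch
  {
    record Person (String firstName, String lastName) {}

    interface PersonFinder
      {
        PersonFinder withFirstName (String regex); // intermediate method
        List<Person> results();                    // termination method
      }

    // In-memory implementation; note it is immutable, like the library's Finders.
    static class InMemoryPersonFinder implements PersonFinder
      {
        private final List<Person> persons;
        private final String regex;

        InMemoryPersonFinder (List<Person> persons) { this(persons, ".*"); }

        private InMemoryPersonFinder (List<Person> persons, String regex)
          { this.persons = persons; this.regex = regex; }

        @Override public PersonFinder withFirstName (String regex)
          { return new InMemoryPersonFinder(persons, regex); }

        @Override public List<Person> results()
          {
            return persons.stream()
                          .filter(p -> Pattern.matches(regex, p.firstName()))
                          .collect(Collectors.toList());
          }
      }

    public static void main (String ... args)
      {
        final var finder = new InMemoryPersonFinder(
            List.of(new Person("Bill", "Evans"), new Person("Sarah", "Vaughan")));
        System.out.println(finder.withFirstName("B.*").results());
      }
  }
```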
InMemoryFinderExample | A simple in-memory Finder example. |
ExtendedFinderExample | An extended finder with two custom methods and some examples of interaction with Streams. |
JPAFinderExample | A data source Finder that runs with JPA (Hibernate). This example also uses As (see below). |
As is a factory for providing adapters (in the sense of the Adapter pattern) for an object.
Terminology note: the object for which we are going to create an adapter will be called “datum” and the adapters “roles”. These terms are borrowed from the DCI architectural pattern (Data, Context and Interaction), even though As needn't be used that way. But TheseFoolishThings does provide explicit support for DCI, as will be explained in the relevant chapter.
Let's start again from a model class, which could still be the Person entity. In a typical application we might need to display it in a user interface and to save it to a file, for instance in XML format. The first point is to decouple Person from the way we perform those two operations, also to comply with the Dependency Inversion principle: we want the UI and the XML subsystem to depend on the abstraction (Person), not the other way around.
We introduce two small interfaces: Displayable for computing the display name and Marshallable to serialize an object to an XML stream.
interface Displayable
{
String getDisplayName();
}
interface Marshallable
{
void writeTo (Path path)
throws IOException;
}
These two interfaces are very simple, so they are also in compliance with the Single Responsibility principle and the Interface Segregation principle.
Having Person implement the two interfaces is not an option, because it would lead to tight coupling. Working with composition would slightly improve things:
class Person
{
public Displayable getDisplayable() { ... }
public Marshallable getMarshallable() { ... }
}
even though a hardwired implementation of the two interfaces inside Person would still leave us not too far from the starting point. Introducing a RoleFactory might be the next step:
class RoleFactory
{
public static RoleFactory getInstance() { ... }
public Displayable createDisplayableFor (Person person) { ... }
public Marshallable createMarshallableFor (Person person) { ... }
}
class Person
{
public Displayable getDisplayable()
{
return RoleFactory.getInstance().createDisplayableFor(this);
}
public Marshallable getMarshallable()
{
return RoleFactory.getInstance().createMarshallableFor(this);
}
}
Since in a real-world application we are going to deal with multiple entities, RoleFactory must be generic:
class RoleFactory
{
public static RoleFactory getInstance() { ... }
public Displayable createDisplayableFor (Object datum) { ... }
public Marshallable createMarshallableFor (Object datum) { ... }
}
But it's no good to have a fixed, limited set of roles. Who knows what we are going to need in a user interface? For instance, a Selectable role might be used to execute a task whenever a Person representation is double-clicked in a UI widget. RoleFactory can be further generalised as:
class RoleFactory
{
public static RoleFactory getInstance() { ... }
public <T> T createRoleFor (Object datum, Class<T> roleType) { ... }
}
so Person
becomes:
class Person
{
public Displayable getDisplayable()
{
return RoleFactory.getInstance().createRoleFor(this, Displayable.class);
}
public Marshallable getMarshallable()
{
return RoleFactory.getInstance().createRoleFor(this, Marshallable.class);
}
}
But, again, there is still too much coupling involving Person: any new role would require a new method, and after all we don't want Person to depend on the RoleFactory infrastructure; it might as well be legacy code that we can't or don't want to change. Let's move the responsibility of retrieving the adapter from the adaptee class to the client code that requires the adapter (which does make sense):
class UserInterface
{
private final RoleFactory roleFactory = RoleFactory.getInstance();
public void renderPerson (Person person)
{
String displayName = roleFactory.createRoleFor(person, Displayable.class).getDisplayName();
}
}
So now we are back to the pristine Person, totally unaware of the roles:
class Person
{
...
}
Now the design is good and we can introduce some syntactic sugar. Since the operation might be read as «given a Person, treat it as if it were a Displayable», we can rename createRoleFor() to as() (short names with a proper meaning improve readability) and, with a bit of rearranging of methods and the use of static imports, get to this code:
import static RoleFactory.as;
class UserInterface
{
public void renderPerson (Person person)
{
String displayName = as(person, Displayable.class).getDisplayName();
}
}
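The idea can be sketched with a tiny, hypothetical role registry (not the actual library implementation): a map from (datum class, role type) to a factory function, with a static as() as the syntactic sugar on top. The Person and Displayable types below are stand-ins for the ones in the example:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the idea behind as(): a registry maps a (datum class,
// role type) pair to a factory, so the datum class stays unaware of its roles.
class RoleRegistry
  {
    record Person (String firstName, String lastName) {}
    interface Displayable { String getDisplayName(); }

    private static final Map<Class<?>, Map<Class<?>, Function<Object, ?>>> REGISTRY = new HashMap<>();

    static <D, R> void register (Class<D> datumType, Class<R> roleType, Function<D, R> factory)
      {
        REGISTRY.computeIfAbsent(datumType, __ -> new HashMap<>())
                .put(roleType, datum -> factory.apply(datumType.cast(datum)));
      }

    static <R> R as (Object datum, Class<R> roleType)  // the syntactic sugar shown above
      {
        final var factory = REGISTRY.getOrDefault(datum.getClass(), Map.of()).get(roleType);
        if (factory == null)
          {
            throw new IllegalArgumentException("No role " + roleType + " for " + datum.getClass());
          }
        return roleType.cast(factory.apply(datum));
      }

    public static void main (String ... args)
      {
        register(Person.class, Displayable.class,
                 p -> () -> p.firstName() + " " + p.lastName());   // lambda implements Displayable
        System.out.println(as(new Person("Joe", "Smith"), Displayable.class).getDisplayName());
      }
  }
```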
If, on the other hand, we can apply a small change to Person (the bare minimum), we could think of an interface
interface As
{
public <T> T as (Class<T> roleType);
}
and have Person implement that interface:
class Person implements As
{
...
}
So we now have another version of our code:
class UserInterface
{
public void renderPerson (Person person)
{
String displayName = person.as(Displayable.class).getDisplayName();
}
}
class Persistence
{
public void storePerson (Person person, Path path)
throws IOException
{
person.as(Marshallable.class).writeTo(path);
}
}
According to Martin Fowler:
Tell-Don't-Ask is a principle that helps people remember that object-orientation is about bundling data with the functions that operate on that data. It reminds us that rather than asking an object for data and acting on that data, we should instead tell an object what to do. This encourages to move behavior into an object to go with the data.
It's one of the ways we can make our design really strong and resistant to change. Unfortunately, in practice it is the exact opposite of what is commonly done in Java with the Java Beans idiom, which mandates getter and setter methods. Well-known libraries and frameworks (such as JPA, JAXB, GUI frameworks, etc.) are designed like that and encourage programmers to follow that way.
This is also due to the fact that TDA is more complex to implement, in particular when new behaviour needs to be added. For instance, given a Person provided with standard getters such as getFirstName() and getLastName(), it's easy to use these properties in a plurality of contexts, such as:
var joe = new Person("Joe", "Smith");
System.out.printf("Name: %s lastName: %s\n", joe.getFirstName(), joe.getLastName());
...
graphicContext.renderString(x, y, String.format("Name: %s last name: %s\n", joe.getFirstName(), joe.getLastName()));
How would this look in TDA? Something such as:
var joe = new Person("Joe", "Smith");
joe.render("Name: %1$s lastName: %2$s", System.out::println); // 1$ is first name, 2$ is last name, etc.
This assumes render(String, Consumer<String>) is implemented in Person; not a big deal, since almost any object we can think of can be rendered as a string, and it can be done with facilities available in the standard Java library. But what about this?
joe.render(graphicContext, x, y, "Name: %1$s lastName: %2$s");
render(GraphicContext, int, int, String) would introduce a dependency of Person, a model class, on GraphicContext, part of a graphical API: this is not acceptable. As can come to the rescue. Since roles can be injected without touching the original object, a possible solution is:
joe.as(Renderable.class).render("Name: %1$s lastName: %2$s", System.out::println);
joe.as(GraphicRenderable.class).render(graphicContext, x, y, "Name: %1$s last name: %2$s");
Now Person does not depend on GraphicRenderable; a concrete implementation of GraphicRenderable depends on Person (which is good and complies with the Dependency Inversion Principle); the client code depends on both (as expected).
PENDING: more details about the implementation of roles, their “friendship” to owner classes and constraints imposed by Java 9 modules.
Injected DCI roles could also be useful for a business model designed with the TDA principle in mind, as adapters to an external world that follows the Java Beans idiom.
If you got up to here, you have understood what As is for. Now it's time to deal with implementation details. But before going on, let's recap and give a couple of definitions. Role implementations compatible with As can be:
- implemented statically, as in class Person implements Displayable, Marshallable. This is totally against the decoupling that As fosters, but it's legal.
- passed as static roles when the As.forObject() delegate is introduced. While still a coupled approach, it might be meaningful for some corner case.
- dynamic, discovered and instantiated by the runtime.
Note that even when static roles are used, dynamic ones can always be added later.
To be able to use As we need to learn three more things: how to add As capabilities to datum objects, how to implement roles, and how to configure the As support. Once an object is declared to implement As, how do we write the code for the methods in the contract? The easiest way is by delegation:
class MyObject implements As
  {
    private final As delegate = As.forObject(this);

    @Override @Nonnull
    public <T> Optional<T> maybeAs (@Nonnull Class<? extends T> type)
      {
        return delegate.maybeAs(type);
      }

    @Override @Nonnull
    public <T> Collection<T> asMany (@Nonnull Class<? extends T> type)
      {
        return delegate.asMany(type);
      }
  }
If Lombok is used, the code is even simpler:
@EqualsAndHashCode(exclude = "delegate") @ToString(exclude = "delegate")
class MyObject implements As
  {
    @Delegate
    private final As delegate = As.forObject(this);
  }
In any case, remember to exclude the delegate object from equals(), hashCode() and toString().
Note that this step only satisfies the implementation requirements of the object, while the runtime has not been initialised yet; this means that no role will ever be found. See the “Configuration” chapters below for further details.
It is possible to call As.forObject() with extra arguments that are interpreted as static roles. If a role is an implementation of RoleFactory, it will actually act as a factory of possibly dynamic roles. While this works, it is not the most powerful approach, since it couples objects with their roles, while the whole point of As is to keep them totally decoupled.
With Lombok, if one accepts advanced features such as @ExtensionMethod, things can be further simplified: it is sufficient to put the annotation @ExtensionMethod(AsExtensions.class) on the class in which you want to use As methods. In the code sample below Person is a POJO that doesn't implement As, but the relevant methods are available on it:
@ExtensionMethod(AsExtensions.class) @Slf4j public class DisplayableExample { public void run() { final var joe = new Person(new Id("1"), "Joe", "Smith"); final var luke = new Person(new Id("2"), "Luke", "Skywalker"); // approach with classic getter log.info("******** (joe as Displayable).displayName: {}", joe.as(_Displayable_).getDisplayName()); log.info("******** (luke as Displayable).displayName: {}", luke.as(_Displayable_).getDisplayName()); // approach oriented to Tell Don't Ask joe.as(_Renderable_).renderTo("******** (joe as Renderable): %1$s %2$s ", log::info); luke.as(_Renderable_).renderTo("******** (luke as Renderable): %1$s %2$s ", log::info); } }
Note that this approach might have a performance impact: see issue TFT-301.
Finally, it is possible to do without instance methods, using instead the static methods of AsExtensions:
import static it.tidalwave.util.AsExtensions.*;
...
Displayable d = as(joe, _Displayable_);
Also in this case there might be a performance hit.
As and roles with generics
As explained above, As.as() expects a Class as a parameter; this works well with roles that don't use generics. But what about those that do? Let's for instance assume we have the role:
interface DataRetriever<T>
  {
    public List<T> retrieve();
  }
Because of type erasure, the expression as(DataRetriever.class) doesn't bear any information about the associated generic type. The As API has been designed so that the following code compiles and works:
List<String> f1 = object1.as(DataRetriever.class).retrieve();
List<LocalDate> f2 = object2.as(DataRetriever.class).retrieve();
because the result of as() is not generified and the compiler is allowed to assign it to any generified type; but this raises a warning. To work around this problem, a specific As.Type has been introduced, to be used as a parameter in place of Class:
private static final As.Type<DataRetriever<String>> _StringRetriever_ = As.type(DataRetriever.class);
private static final As.Type<DataRetriever<LocalDate>> _LocalDateRetriever_ = As.type(DataRetriever.class);
So the following code compiles with no warning:
List<String> f3 = object1.as(_StringRetriever_).retrieve();
List<LocalDate> f4 = object2.as(_LocalDateRetriever_).retrieve();
… at the expense of a warning in the declaration of the As.Type variables.
Note that it's still not possible to have two roles with the same class and different generics associated with the same object: again, because of type erasure, the runtime would consider them as two instances of the same role type. To differentiate them it is necessary to use two distinct subclasses.
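A minimal sketch, independent of the library, shows why distinct subclasses work as discriminators: both parameterisations of the role erase to the same runtime interface, while the two subinterfaces have distinct Class objects that can serve as registry keys (all names below are invented for illustration):

```java
import java.time.LocalDate;
import java.util.List;

// Sketch: DataRetriever<String> and DataRetriever<LocalDate> are the same type
// at runtime, so only dedicated subtypes can be told apart by a Class-keyed registry.
class ErasedRoles
  {
    interface DataRetriever<T> { List<T> retrieve(); }
    interface StringRetriever extends DataRetriever<String> {}
    interface DateRetriever extends DataRetriever<LocalDate> {}

    public static void main (String ... args)
      {
        final DataRetriever<String> r1 = () -> List.of("foo");
        final DataRetriever<LocalDate> r2 = () -> List.of(LocalDate.of(2024, 1, 1));
        // Both lambda classes implement the very same erased interface...
        System.out.println(r1.getClass().getInterfaces()[0] == r2.getClass().getInterfaces()[0]);
        // ...while the two subinterfaces are distinct keys:
        System.out.println(StringRetriever.class.equals(DateRetriever.class) ? "same" : "distinct");
      }
  }
```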
After the runtime is instantiated, a global context is implicitly activated; a simple code sample is given in the “DciDisplayableExample” module.
The runtime is scanned for classes annotated with DciRole, which specifies which datum class (or classes) the role is associated with. The datum instance is also injected in the constructor and, typically, the role implementation keeps a reference to it by means of a field.
@DciRole(datumType = Person.class) @RequiredArgsConstructor
public final class PersonDisplayable implements Displayable
  {
    @Nonnull
    private final Person datum;

    @Override @Nonnull
    public String getDisplayName()
      {
        return String.format("%s %s", datum.firstName, datum.lastName);
      }
  }
Now everything is ready to use the role:
@ExtensionMethod(AsExtensions.class) @Slf4j public class DisplayableExample { public void run() { final var joe = new Person(new Id("1"), "Joe", "Smith"); final var luke = new Person(new Id("2"), "Luke", "Skywalker"); // approach with classic getter log.info("******** (joe as Displayable).displayName: {}", joe.as(_Displayable_).getDisplayName()); log.info("******** (luke as Displayable).displayName: {}", luke.as(_Displayable_).getDisplayName()); // approach oriented to Tell Don't Ask joe.as(_Renderable_).renderTo("******** (joe as Renderable): %1$s %2$s ", log::info); luke.as(_Renderable_).renderTo("******** (luke as Renderable): %1$s %2$s ", log::info); } }
In most cases a global context is everything an application needs.
The example named “DciMarshalXStreamExample” illustrates how local contexts work. It uses the popular serialization framework XStream to provide XML serialisation capabilities in the form of roles.
Let's first introduce the model objects:
@Immutable @AllArgsConstructor @Getter @EqualsAndHashCode public class Person { @Nonnull public static Person prototype() { return new Person("", ""); } public Person (@Nonnull final String firstName, @Nonnull final String lastName) { this(Id.of(UUID.randomUUID().toString()), firstName, lastName); } final Id id; @Nonnull final String firstName; @Nonnull final String lastName; @Override @Nonnull public String toString() { return firstName + " " + lastName; } }
@NoArgsConstructor @EqualsAndHashCode public class ListOfPersons implements List<Person> { @Delegate private final List<Person> persons = new ArrayList<>(); public static ListOfPersons empty () { return new ListOfPersons(); } @Nonnull public static ListOfPersons of (@Nonnull final Person ... persons) { return new ListOfPersons(List.of(persons)); } public ListOfPersons (@Nonnull final List<? extends Person> persons) { this.persons.addAll(persons); } @Override @Nonnull public String toString() { return persons.toString(); } }
ListOfPersons is basically an implementation of List<Person> that delegates all methods to an ArrayList. While it doesn't offer any specific additional behaviour (apart from some factory methods), it is required in order to use dynamic roles, as they are bound to a specific class; because of Java type erasure a List<Person> cannot be distinguished from a List of any other kind, such as List<String>. Having a specific subclass fixes this problem, acting as a sort of “reification”.
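The “reification” trick can be demonstrated in a few lines of plain Java (using a hypothetical ListOfStrings subclass): the two parameterised lists share the same runtime class, while a dedicated subclass has its own.

```java
import java.util.ArrayList;

// Sketch: type parameters are erased, so only a dedicated subclass gives a
// distinct runtime class that the role machinery can bind roles to.
class ReificationSketch
  {
    static class ListOfStrings extends ArrayList<String> {}

    public static void main (String ... args)
      {
        // Same runtime class, despite the different type parameters:
        System.out.println(new ArrayList<String>().getClass() == new ArrayList<Integer>().getClass());
        // The subclass is distinguishable:
        System.out.println(new ListOfStrings().getClass() == new ArrayList<String>().getClass());
      }
  }
```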
Now let's deal with XStream. The first thing to do is to set up a bundle of configuration that instructs the framework on how to manage our model objects. This configuration is encapsulated in a specific DCI context:
@DciContext
public interface XStreamContext
  {
    @Nonnull
    public XStream getXStream();
  }
@DciContext public class XStreamContext1 implements XStreamContext { @Getter private final XStream xStream = new XStream(new StaxDriver()); public XStreamContext1() { // xStream.alias("person", PersonConverter.MutablePerson.class); xStream.alias("person", Person.class); xStream.aliasField("first-name", PersonConverter.MutablePerson.class, "firstName"); xStream.aliasField("last-name", PersonConverter.MutablePerson.class, "lastName"); xStream.useAttributeFor(PersonConverter.MutablePerson.class, "id"); xStream.registerConverter(new IdXStreamConverter()); xStream.registerConverter(new PersonConverter()); xStream.alias("persons", ListOfPersons.class); xStream.addImplicitCollection(ListOfPersons.class, "persons"); xStream.addPermission(AnyTypePermission.ANY); } }
Details about XStream converters are not listed, since they are specific to XStream. An alternative implementation could be:
@DciContext public class XStreamContext2 implements XStreamContext { @Getter private final XStream xStream = new XStream(new StaxDriver()); public XStreamContext2() { // xStream.alias("person", PersonConverter.MutablePerson.class); xStream.alias("PERSON", Person.class); xStream.aliasField("ID", PersonConverter.MutablePerson.class, "id"); xStream.aliasField("FIRST-NAME", PersonConverter.MutablePerson.class, "firstName"); xStream.aliasField("LAST-NAME", PersonConverter.MutablePerson.class, "lastName"); xStream.registerConverter(new IdXStreamConverter()); xStream.registerConverter(new PersonConverter()); xStream.alias("PERSONS", ListOfPersons.class); xStream.addImplicitCollection(ListOfPersons.class, "persons"); xStream.addPermission(AnyTypePermission.ANY); } }
Now, what if one wishes to use each of the two serialisation configurations in the same application, but in different circumstances? That's what DCI local contexts are for: they can be activated only in specific portions of the code, being bound to and unbound from the current thread by specific calls to an instance of ContextManager (which must be injected, e.g. by using Spring):
final var xStreamContext1 = new XStreamContext1();
try
  {
    contextManager.addLocalContext(xStreamContext1);
    codeThatUsesMarshalling();
  }
finally
  {
    contextManager.removeLocalContext(xStreamContext1);
  }
The try/finally pattern, which ensures that the context is unbound even in case of exception, can be replaced by a shorter syntax using try-with-resources:
try (final var binder = contextManager.binder(new XStreamContext2()))
  {
    codeThatUsesMarshalling();
  }
Alternate variants with lambdas are also supported.
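One possible way to implement such a binder, shown here only as a hypothetical sketch (this is not the actual ContextManager code), is an AutoCloseable backed by a thread-local stack, so the context is removed even if the body throws:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of a try-with-resources context binder: binds a context
// to the current thread on creation and removes it in close().
class ContextBinderSketch
  {
    private static final ThreadLocal<Deque<Object>> CONTEXTS = ThreadLocal.withInitial(ArrayDeque::new);

    static AutoCloseable binder (Object context)
      {
        CONTEXTS.get().push(context);
        return () -> CONTEXTS.get().pop();   // runs even if the body throws
      }

    static List<Object> activeContexts()
      {
        return List.copyOf(CONTEXTS.get());
      }

    public static void main (String ... args) throws Exception
      {
        try (var binder = binder("XStreamContext2"))
          {
            System.out.println(activeContexts());   // [XStreamContext2]
          }
        System.out.println(activeContexts());       // []
      }
  }
```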
PENDING: include examples
Now let's go on with the implementation of roles. First we introduce a generic support class for Marshallable, as follows:
@RequiredArgsConstructor
public abstract class XStreamMarshallableSupport<T> implements Marshallable
  {
    @Nonnull
    private final T datum;

    @Nonnull
    private final XStreamContext xStreamContext;

    @Override
    public final void marshal (@Nonnull final OutputStream os)
      {
        xStreamContext.getXStream().toXML(datum, os);
      }
  }
Two subclasses are required to bear the relevant annotations that bind them to their owners (Person and ListOfPersons).
@DciRole(datumType = Person.class, context = XStreamContext.class)
public final class PersonXStreamMarshallable extends XStreamMarshallableSupport<Person>
  {
    public PersonXStreamMarshallable (@Nonnull final Person datum, @Nonnull final XStreamContext context)
      {
        super(datum, context);
      }
  }
@DciRole(datumType = ListOfPersons.class, context = XStreamContext.class)
public final class ListOfPersonsXStreamMarshallable extends XStreamMarshallableSupport<ListOfPersons>
  {
    public ListOfPersonsXStreamMarshallable (@Nonnull final ListOfPersons datum, @Nonnull final XStreamContext context)
      {
        super(datum, context);
      }
  }
Note that in this case the @DciRole annotation explicitly refers to XStreamContext, since the role must be active only when either of the two contexts is activated. The context instance is injected in the constructor together with the associated datum instance, so it can provide the XStream configuration.
The implementation of unmarshallers is similar:
@RequiredArgsConstructor
public abstract class XStreamUnmarshallableSupport<T> implements Unmarshallable
  {
    @Nonnull
    private final T datum;

    @Nonnull
    private final XStreamContext xStreamContext;

    @Override @Nonnull
    public final T unmarshal (@Nonnull final InputStream is)
      {
        return (T)xStreamContext.getXStream().fromXML(is);
      }
  }
@DciRole(datumType = Person.class, context = XStreamContext.class)
public final class PersonXStreamUnmarshallable extends XStreamUnmarshallableSupport<Person>
  {
    public PersonXStreamUnmarshallable (@Nonnull final Person datum, @Nonnull final XStreamContext context)
      {
        super(datum, context);
      }
  }
@DciRole(datumType = ListOfPersons.class, context = XStreamContext.class)
public final class ListOfPersonsXStreamUnmarshallable extends XStreamUnmarshallableSupport<ListOfPersons>
  {
    public ListOfPersonsXStreamUnmarshallable (@Nonnull final ListOfPersons datum, @Nonnull final XStreamContext context)
      {
        super(datum, context);
      }
  }
Now everything is ready:
final var joe = new Person(new Id("1"), "Joe", "Smith"); final var luke = new Person(new Id("2"), "Luke", "Skywalker"); var marshalledPersons = ""; var marshalledPerson = ""; try (final var os = new ByteArrayOutputStream()) { joe.as(_Marshallable_).marshal(os); log.info("******** (joe as Marshallable) marshalled: {}\n", marshalledPerson = os.toString(UTF_8)); } try (final var os = new ByteArrayOutputStream()) { ListOfPersons.of(joe, luke).as(_Marshallable_).marshal(os); log.info("******** (listOfPersons as Marshallable) marshalled: {}\n", marshalledPersons = os.toString(UTF_8)); }
As far as unmarshallers are concerned, since as() must be called on an instantiated object, a “prototype” empty object must be created. It is immediately discarded, as the relevant object is the one returned by the unmarshal() call.
try (final var is = new ByteArrayInputStream(marshalledPerson.getBytes(UTF_8))) { final var person = Person.prototype().as(_Unmarshallable_).unmarshal(is); log.info("******** Unmarshalled person: {}\n", person); } try (final var is = new ByteArrayInputStream(marshalledPersons.getBytes(UTF_8))) { final var listOfPersons = ListOfPersons.empty().as(_Unmarshallable_).unmarshal(is); log.info("******** Unmarshalled persons: {}\n", listOfPersons); }
Global and local contexts can co-exist: local contexts just bind new roles in addition to those made available by the global context. Multiple local contexts can be used at the same time. If the same role is bound by more than one context, all of the instances are available by calling the method As.asMany().
As far as the As.as() and As.maybeAs() methods that return a single role are concerned, at the moment it is not deterministic which one is returned.
See issue TFT-192.
While the global context is immutable, local contexts can come and go; the lifespan of a typical owner object encompasses multiple activations and deactivations of local contexts. So, to which instant of the owner lifespan does the set of roles returned by as() refer? Always to the creation time of the owner object, even though roles are not necessarily instantiated at that moment. The runtime takes a snapshot of the local contexts active at the creation time of an owner object and uses that snapshot every time it searches for a role.
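The snapshot semantics can be sketched as follows (hypothetical code, not the actual runtime): the owner copies the thread's active contexts in its constructor and keeps using that copy for role lookup, regardless of later activations or deactivations:

```java
import java.util.List;

// Hypothetical sketch of creation-time context snapshots.
class SnapshotSketch
  {
    static final ThreadLocal<List<String>> ACTIVE = ThreadLocal.withInitial(List::of);

    static class Owner
      {
        // Snapshot taken once, at creation time of the owner object.
        final List<String> snapshot = List.copyOf(ACTIVE.get());

        List<String> contextsForRoleLookup()
          {
            return snapshot;
          }
      }

    public static void main (String ... args)
      {
        ACTIVE.set(List.of("context1"));
        final var owner = new Owner();
        ACTIVE.set(List.of());                              // context deactivated afterwards
        System.out.println(owner.contextsForRoleLookup());  // still [context1]
      }
  }
```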
Finders
An exception to the above-mentioned rule might happen with Finders, in case their result is composed of objects implementing As: the programmer might want to use the local contexts specified at the moment of the instantiation of the Finder, and not at the moment it computes the result. In this case the local context can be activated inside the Finder implementation.
For this reason the ExtendedFinderSupport interface provides specific support, namely the withContext(Object) method: it makes the Finder aware of a context (it can be called multiple times, in which case local contexts are accumulated). The class HierarchicFinderSupport provides the accumulation behaviour and makes the local contexts available to subclasses by means of the getContexts() method.
PENDING: Show a code example.
A role can be implemented by referring to other roles. For instance, let's introduce two example roles that save/load an object to/from a Path:
public interface Savable { public static final Class<Savable> _Savable_ = Savable.class; public default void saveTo (@Nonnull final Path path) throws IOException { saveTo(path, StandardCharsets.UTF_8); } public void saveTo (@Nonnull final Path path, @Nonnull final Charset charset, @Nonnull OpenOption... openOptions) throws IOException; }
public interface Loadable { public static final Class<Loadable> _Loadable_ = Loadable.class; public default <T> T loadFrom (@Nonnull final Path path) throws IOException { return loadFrom(path, StandardCharsets.UTF_8); } public <T> T loadFrom (@Nonnull final Path path, @Nonnull final Charset charset, @Nonnull OpenOption... openOptions) throws IOException; }
They can be used as follows:
joe.as(_Savable_).saveTo(path1);
ListOfPersons.of(joe, luke).as(_Savable_).saveTo(path2);
final var p = Person.prototype().as(_Loadable_).loadFrom(path1);
final var lp = ListOfPersons.empty().as(_Loadable_).loadFrom(path2);
We can provide implementations relying upon the Marshallable / Unmarshallable roles, whose instances can be obtained by using as(). This can be done directly on the datum, if it implements As; or by creating a delegate by means of As.forObject(), as in the example below:
@DciRole(datumType = Object.class) public class MarshallableSavable implements Savable { @Nonnull private final As datumAsDelegate; public MarshallableSavable (@Nonnull final Object datum) { this.datumAsDelegate = As.forObject(datum); } @Override public void saveTo (@Nonnull final Path path, @Nonnull final Charset charset, @Nonnull final OpenOption ... openOptions) throws IOException { assert charset.equals(StandardCharsets.UTF_8); try (final var os = Files.newOutputStream(path, openOptions)) { datumAsDelegate.as(_Marshallable_).marshal(os); } } }
@DciRole(datumType = Object.class) public class MarshallableLoadable implements Loadable { @Nonnull private final As datumAsDelegate; public MarshallableLoadable (@Nonnull final Object datum) { this.datumAsDelegate = As.forObject(datum); } @Override public <T> T loadFrom (@Nonnull final Path path, @Nonnull final Charset charset, @Nonnull final OpenOption ... openOptions) throws IOException { assert charset.equals(StandardCharsets.UTF_8); try (final var is = Files.newInputStream(path, openOptions)) { return datumAsDelegate.as(_Unmarshallable_).unmarshal(is); } } }
The As implementation relies on a singleton named SystemRoleFactory that, given an object, returns all the roles associated with it. A default implementation relies upon the Java Service Provider Interface, based on the class ServiceProvider. In short, a special file named META-INF/services/it.tidalwave.util.spi.SystemRoleFactoryProvider is searched for at runtime, and it must contain the name of a provider for SystemRoleFactory.
The default implementation (without Spring) is unable to find any role. Applications can specify an overriding implementation, such as in the example:
public class HardwiredSystemRoleFactoryProvider implements SystemRoleFactoryProvider { private static final List<Class<?>> ROLES = List.of(PersonJpaPersistable.class); static class HardwiredRoleFactory extends SystemRoleFactorySupport { public void initialize() { scan(ROLES); } } @Override @Nonnull public SystemRoleFactory getSystemRoleFactory () { final var h = new HardwiredRoleFactory(); h.initialize(); return h; } }
And the META-INF/services/it.tidalwave.util.spi.SystemRoleFactoryProvider file contains:
it.tidalwave.thesefoolishthings.examples.jpafinderexample.HardwiredSystemRoleFactoryProvider
Of course it is possible to provide more sophisticated implementations, such as a classpath scanner (with Spring this is provided out-of-the-box).
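The lookup mechanism described above is the standard Java SPI, exposed by java.util.ServiceLoader. The following self-contained sketch (with a hypothetical service interface, not the library's) shows its behaviour when no META-INF/services entry is present on the classpath: the loader simply finds no provider, which mirrors why an explicit override is needed:

```java
import java.util.ServiceLoader;

// Sketch of the standard Java SPI lookup: without a
// META-INF/services/<interface-name> file on the classpath, no provider is found.
class SpiLookupSketch
  {
    public interface GreeterProvider   // hypothetical service interface
      {
        String greet();
      }

    public static void main (String ... args)
      {
        final ServiceLoader<GreeterProvider> loader = ServiceLoader.load(GreeterProvider.class);
        System.out.println(loader.findFirst().isPresent());   // false: nothing registered
      }
  }
```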
The standard Java SPI approach sets up the runtime once and for all, as is appropriate for an application. But when running tests, a specific runtime must be installed from scratch each time. For this purpose a specific method is available that should be called before running a test (or a batch of tests), providing an empty provider or a mock:
@BeforeClass
public void setup()
  {
    SystemRoleFactory.reset();
  }
To have As working with Spring, another dependency must be added:
<dependency>
<groupId>it.tidalwave.thesefoolishthings</groupId>
<artifactId>it-tidalwave-role-spring</artifactId>
<version>3.2-ALPHA-23</version>
</dependency>
In the code it is sufficient to include the bean RoleSpringConfiguration
in the application context, as in this example:
@Configuration public class Main { @Bean public DisplayableExample displayableExample() { return new DisplayableExample(); } public static void main (@Nonnull final String ... args) { final var context = new AnnotationConfigApplicationContext(RoleSpringConfiguration.class, Main.class); context.getBean(DisplayableExample.class).run(); } }
PENDING: This is probably not strictly required, but it makes things such as the ContextManager available with dependency injection.
If annotations are not used and beans.xml
files are preferred, the value of RoleSpringConfiguration.BEANS
must be included in the XML context.
The Spring adapter is able to scan the classpath to find annotated roles. Java classpath scanners need to work from a set of specified root packages; the default ones are com, org and it. If custom packages are needed, they can be specified as follows:
it.tidalwave.util.spring.ClassScanner.setBasePackages("fr:es:de");
With Spring roles can specify additional parameters in their constructor: the runtime will try to inject into them beans defined in the context.
PENDING: Injection qualifiers are not supported yet.
JPAFinderExample | While the main focus of this example is a Finder, DCI is used to inject persistence-related roles. It also demonstrates a custom SystemRoleFactoryProvider. |
Standalone, with Lombok ExtensionMethod |
DciDisplayableExample | A very simple DCI example. | Spring |
DciMarshalXStreamExample | DCI used to persist entities with Xstream. | Spring |
DciPersistenceJpaExample | DCI used to persist entities with JPA/Hibernate. | SpringBoot |
DciSwingExample | A little demonstration of DCI with a User Interface (Swing). | Spring |
Inspired by the heterogeneous map pattern described in Effective Java by Joshua Bloch, TypeSafeMap is an immutable map that works with type-aware keys, so its retrieval method is type-safe; furthermore, it supports Optional.
final Key<String> k1 = Key.of("Key 1", String.class);
final Key<Integer> k2 = Key.of("Key 2", Integer.class);
final var m = TypeSafeMap.newInstance()
                         .with(k1, "Value 1")
                         .with(k2, 1);
final Optional<String> v1 = m.getOptional(k1);
final Optional<Integer> v2 = m.getOptional(k2);
assertThat(v1.get(), is("Value 1"));
assertThat(v2.get(), is(1));
TypeSafeMultiMap is similar, but associates keys with collections of values rather than single values; so putting multiple (key, value) pairs with the same key keeps all the values instead of replacing the previous one.
final Key<String> k1 = Key.of("Key 1", String.class);
final Key<Integer> k2 = Key.of("Key 2", Integer.class);
final var m = TypeSafeMultiMap.newInstance()
                              .with(k1, "Value 1")
                              .with(k1, "Value 2")
                              .with(k2, 1)
                              .with(k2, 2);
final Collection<String> v1 = m.get(k1);
final Collection<Integer> v2 = m.get(k2);
assertThat(v1, is(List.of("Value 1", "Value 2")));
assertThat(v2, is(List.of(1, 2)));
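The pattern behind TypeSafeMap can be sketched in a few lines (a simplified, mutable variant for brevity; the real class is immutable): the key carries its value type, so retrieval can cast safely.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch of Bloch's typesafe heterogeneous container: the Class carried by the
// key lets getOptional() return the right type without unchecked casts at call sites.
class TypeSafeMapSketch
  {
    record Key<T> (String name, Class<T> type) {}

    private final Map<Key<?>, Object> map = new HashMap<>();

    <T> TypeSafeMapSketch with (Key<T> key, T value)
      {
        map.put(key, value);
        return this;
      }

    <T> Optional<T> getOptional (Key<T> key)
      {
        return Optional.ofNullable(key.type().cast(map.get(key)));
      }

    public static void main (String ... args)
      {
        final var k1 = new Key<>("Key 1", String.class);
        final var k2 = new Key<>("Key 2", Integer.class);
        final var m = new TypeSafeMapSketch().with(k1, "Value 1").with(k2, 1);
        System.out.println(m.getOptional(k1).orElseThrow());   // Value 1
        System.out.println(m.getOptional(k2).orElseThrow());   // 1
      }
  }
```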
Find more information in the JavaDoc for TypeSafeMap and TypeSafeMultiMap.