Poseidon Athens Half Marathon Registrations – Architecture, Technical & Infrastructure Overview

Table of Contents

  • Overview
  • Samples
    • Front-end Samples
    • Admin Site Samples
    • Report Site Samples
  • Architectural Overview
  • Technologies/Frameworks
  • Infrastructure
  • Future Improvements

Overview

I have been helping my dad, who is organising the 3rd biggest running event in Greece, the Poseidon Athens Half Marathon and Parallel Races, by putting together a registration capability. This article describes all the technical aspects of this effort.

The effort originally started in 2019, targeting the 2020 event, but Covid hit and all running events in Greece were postponed by two years, so it eventually went live at the end of 2021 for the 2022 event, which successfully took place in the middle of this past April.

Samples

Front-end Samples

Although registrations are currently closed on the main public front end in preparation for the 2023 event, here are some screenshots from the test site:

Following that page comes a secured payment page that forwards the request via the registration-server to a confirmation or rejection page.

Admin Site Samples

All the below are sample dummy data in the test environment

Report Site Samples

All the below are sample dummy data in the test environment

Architectural Overview

In short, there is a public Front-end for the registration forms that interacts exclusively with a public Back-end component. The latter is also responsible for scheduling jobs and sending out email notifications. To support the Organisation members' activities there are two additional components: the Admin site, an authenticated/authorised view of all the data, and the Report site for anything regarding reports and analytics. Finally, there is also a standalone tool responsible for parsing group registrations received via a customised Excel spreadsheet.

Technologies/Frameworks

The public Front-end has been built using the Create React App npx command. Multiple Ant Design components have been utilised, especially the Form component capabilities. For internationalisation the i18next React library has been used, while for routing React Router has been used with the HashRouter variant.

The public Back-end has been built using Spring Boot, exposing REST endpoints. For database connectivity Hibernate has been used. The Spring Scheduling capabilities have been utilised for sending the email notifications, while the Apache Camel Barcode component has been used to generate the QR code on the email notifications. For the generation of the PDF attachment on the email, the iText library has been used.

For the creation of the private Admin site, the JHipster generator has been used to bootstrap the project in its React variant. With its embedded Liquibase capabilities, it is the master for database generation and the application of future schema changes. Special care has been taken to maintain the script written in JDL (JHipster Domain Language), utilising the excellent JDL Studio visualiser. The authentication capability has been enhanced with some new, more granular roles that guard certain functionalities.

The private Report site has been built as a single application where the backend endpoints are served via Spring Boot while the frontend is built with React components. Special care has been taken to construct it in a generic fashion: everything that appears takes the form of tiles showing a diagram or numerical value along with an icon and a download-report link. For the visualisation part Ant Design Charts have been used. For database connectivity the jOOQ library has been utilised.

Infrastructure

The overall infrastructure can be summarised in the diagram below:

The code repositories are all hosted on GitHub, while CI/CD has been set up as GitHub Actions triggering AWS CodePipeline.

All of the backend applications (registrations-server), as well as the standalone applications with both frontend and backend code (registrations-report and registrations-admin), have been deployed into AWS Elastic Beanstalk, AWS's PaaS offering sitting on top of EC2.

Each one of the applications has been deployed twice, representing the Test and Production environments, each having its configuration injected.

The registrations-client front-end codebase also triggers AWS CodePipeline via GitHub Actions, but this time gets deployed to the AWS Amplify solution. In particular, there is a main branch for the Test deployment and a prod branch for the Production deployment. Aside from making it very easy to deploy front-end codebases, AWS Amplify also gives out of the box an Amazon certificate on the autogenerated domain or on an explicitly owned and associated domain.

For the public registrations-server application in particular, the Load Balancing capabilities of Elastic Beanstalk have been utilised to auto-scale (up or down) based on a network-traffic threshold strategy.

This setup brings two additional benefits: firstly, it makes it very easy to associate a domain and/or certificate with the Load Balancer, and secondly, it makes it equally easy to do the same on subdomains.

For that reason a domain has been acquired via AWS Route 53, stc-events-registrations-server.org, and underneath it two subdomains have been set up, test.stc-events-registrations-server.org and prod.stc-events-registrations-server.org, each configured via DNS records to forward directly to the load-balanced AWS Elastic Beanstalk instance, as shown below:

This setup also gives an out-of-the-box certificate associated at the Load Balancer level.

Future Improvements

  • Transition to a more generic, dynamic, metadata-based configuration where more than one registration form could be accommodated. In such an implementation the front-end registration code would be quite thin, and all the metadata about what is displayed and how would come from the back-end. That would facilitate setting up future registration events effortlessly, without the need to write a single line of code.
  • Decoupling the public back-end from scheduling activities, which should reside in a dedicated standalone back-end application responsible exclusively for batch operations.
  • Although card payment is currently supported via Cardlink, invoices are still issued manually. In a future improvement, the invoice will also be generated automatically and instantly at the point of purchase.
  • Enhancing the Group Registrations capability. Currently it is accommodated via Excel spreadsheets that are sent out to Organisations, received back and later parsed via a standalone tool. Excel is limited in terms of form handling, so data quality suffers, introducing delays in processing group registration requests. This functionality will be migrated to a web-based solution where the registration happens online.
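To make the first improvement above concrete, back-end-served form metadata could take a shape like the following. This is a purely illustrative sketch; the record names and fields (FieldMetadata, RegistrationFormMetadata) are hypothetical, not the project's actual model:

```java
import java.util.List;

// Hypothetical shape for back-end-driven form metadata; names are illustrative only
record FieldMetadata(String name, String type, boolean required, List<String> options) {}

record RegistrationFormMetadata(String eventId, List<FieldMetadata> fields) {}

class FormMetadataDemo {
    public static void main(String[] args) {
        // The back-end would serve something like this as JSON; the thin
        // front-end would render one input per FieldMetadata entry
        RegistrationFormMetadata form = new RegistrationFormMetadata(
                "half-marathon-2023",
                List.of(new FieldMetadata("firstName", "text", true, List.of()),
                        new FieldMetadata("race", "select", true, List.of("21km", "10km", "5km"))));
        System.out.println(form.fields().size()); // prints 2
    }
}
```

Under such a scheme, adding a new event's form would mean serving a new metadata document rather than shipping front-end code.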

Contributing Java “Currying” in StackOverflow Documentation

A week or so ago I contributed to the StackOverflow Documentation an entry about Currying, which until now remains mostly intact, with only cosmetic changes made to the original. This is how it goes:

Currying is the technique of translating the evaluation of a function that takes multiple arguments into evaluating a sequence of functions, each with a single argument.

This can be useful when:

  1. Different arguments of a function are calculated at different times. (see Example 1)
  2. Different arguments of a function are calculated by different tiers of the application. (see Example 2)

This generic utility applies currying on a 2-argument function:

class FunctionUtils {

    public static <A,B,C> Function<A,Function<B,C>> curry(BiFunction<A, B, C> f) {
        return a -> b -> f.apply(a,b);
    }

}

The returned curried lambda expression above can also be viewed/written as:

a -> ( b -> f.apply(a,b) );
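The same idea extends to higher arities. Since the JDK stops at BiFunction, a 3-argument version needs its own functional interface; the TriFunction and curry3 names below are my own, not JDK types:

```java
import java.util.function.Function;

class FunctionUtils3 {

    // The JDK has no 3-argument function type, so we declare one
    @FunctionalInterface
    interface TriFunction<A, B, C, D> {
        D apply(A a, B b, C c);
    }

    // Curries f into a -> b -> c -> f.apply(a, b, c)
    public static <A, B, C, D> Function<A, Function<B, Function<C, D>>> curry3(TriFunction<A, B, C, D> f) {
        return a -> b -> c -> f.apply(a, b, c);
    }
}
```

For example, `FunctionUtils3.curry3((Integer x, Integer y, Integer z) -> x + y + z).apply(1).apply(2).apply(3)` evaluates to 6, with each argument suppliable at a different point in time.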

Example 1

Let’s assume that the total yearly income is a function composed of the income and a bonus:

BiFunction<Integer,Integer,Integer> totalYearlyIncome = (income,bonus) -> income + bonus;

Let’s assume that the yearly income portion is known in advance:

Function<Integer,Integer> partialTotalYearlyIncome = FunctionUtils.curry(totalYearlyIncome).apply(10000);

And at some point down the line the bonus is known:

System.out.println(partialTotalYearlyIncome.apply(100));

Example 2

Let’s assume that the car manufacturing involves the application of car wheels and car body:

BiFunction<String,String,String> carManufacturing = (wheels,body) -> wheels.concat(body);

These parts are applied by different factories:

class CarWheelsFactory {
    public Function<String,String> applyCarWheels(BiFunction<String,String,String> carManufacturing) {
        return FunctionUtils.curry(carManufacturing).apply("applied wheels..");
    }
}

class CarBodyFactory {
    public String applyCarBody(Function<String,String> partialCarWithWheels) {
        return partialCarWithWheels.apply("applied car body..");
    }
}

Notice that the CarWheelsFactory above curries the car manufacturing function and applies only the wheels. The car manufacturing process will then take the below form:

CarWheelsFactory carWheelsFactory = new CarWheelsFactory();
CarBodyFactory   carBodyFactory   = new CarBodyFactory();

BiFunction<String,String,String> carManufacturing = (wheels,body) -> wheels.concat(body);

Function<String,String> partialCarWheelsApplied = carWheelsFactory.applyCarWheels(carManufacturing);
String carCompleted = carBodyFactory.applyCarBody(partialCarWheelsApplied);
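Putting the pieces together end to end (compressing the factories above into one runnable snippet):

```java
import java.util.function.BiFunction;
import java.util.function.Function;

class CarManufacturingDemo {

    // Same curry helper as in FunctionUtils above
    static <A, B, C> Function<A, Function<B, C>> curry(BiFunction<A, B, C> f) {
        return a -> b -> f.apply(a, b);
    }

    public static void main(String[] args) {
        BiFunction<String, String, String> carManufacturing = (wheels, body) -> wheels.concat(body);

        // Tier 1: the wheels factory fixes the first argument only
        Function<String, String> partialCarWheelsApplied = curry(carManufacturing).apply("applied wheels..");

        // Tier 2: the body factory supplies the remaining argument later
        String carCompleted = partialCarWheelsApplied.apply("applied car body..");

        System.out.println(carCompleted); // prints "applied wheels..applied car body.."
    }
}
```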

ConcurrentHashMap computeIfAbsent method in Java 8

The very nifty method computeIfAbsent has been added to the ConcurrentMap interface in Java 8 as part of its atomic operations. It is more precisely a default method that provides an alternative to what we used to code ourselves:

if (map.get(key) == null) {
   V newValue = mappingFunction.apply(key);
   if (newValue != null)
      return map.putIfAbsent(key, newValue);
}

but this time providing a function as a second argument.

Most often this method will be used in the context of ConcurrentHashMap in which case the method is implemented in a thread-safe synchronised way.

In terms of usage the method is handy for situations where we want to maintain a thread-safe cache of expensive one-off computed resources.
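As a sketch of that caching idiom (the loadResource computation below is a hypothetical stand-in for any expensive one-off work):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ResourceCache {

    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Hypothetical stand-in for an expensive one-off computation
    private static String loadResource(String key) {
        return "resource:" + key;
    }

    // The mapping function is invoked at most once per key; with ConcurrentHashMap,
    // concurrent callers on the same key block until the value is computed
    public String get(String key) {
        return cache.computeIfAbsent(key, ResourceCache::loadResource);
    }
}
```

Repeated calls to get with the same key return the already-cached instance without recomputing it.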

Here’s another example of holding a key-value pair where value is a thread-safe counter represented by an AtomicInteger:

private final Map<String, AtomicInteger> counters = new ConcurrentHashMap<>();

private void accumulate(String name) {
    counters.computeIfAbsent(name, k -> new AtomicInteger()).incrementAndGet();
}

Guava Splitter

Guava Splitter is a utility class that offers, roughly speaking, the opposite of what the Joiner utility class does.

You can have a read at my previous quick article about Guava Joiner.

Splitter offers a similar look-and-feel to Joiner: the on method serves as a static factory method, and subsequent method calls follow the builder pattern, returning a wrapped this (so they need to be chained prior to the final split method call). Finally, it doesn't try to be complicated, returning an Iterable of Strings. It aspires to replace the usage of the String.split method.

Its usage when we want to get back an Iterable of Strings looks like:

Splitter.on(",").trimResults().split("my,string,to,split")

When we want to retrieve a Map out of its String representation:

Splitter.on(",").trimResults().omitEmptyStrings().withKeyValueSeparator("->").split("one->1,two->2,three->3")

In this instance the call to withKeyValueSeparator yields a Splitter.MapSplitter, an inner class that adds the map context to the splitter.

Guava Joiner

Guava Joiner is a nice utility class that turns data structure contents into Strings, useful for debugging, logging, reporting or toString-ing activities in a flexible, builder-pattern-esque and fluent way.

What we used to achieve with this conventional code:

public static final <T> String joinIterable(final Iterable<T> iterable, String delimiter){
   if(iterable == null)
      throw new IllegalArgumentException();
   StringBuilder result = new StringBuilder("Conventional iterable join: ");
   for(T elem : iterable){
      result.append(elem).append(delimiter);
   }
   result.setLength(result.length() - 1);
   return result.toString();
}

public static final <T> String join(final List<T> list, String delimiter){
   return joinIterable(list, delimiter);
}

can now be achieved with:

Joiner.on(",").useForNull("null").join(list)

The Joiner class is a utility class and the on method is a static factory method that returns a new Joiner. The useForNull method follows the builder pattern, returning a wrapped version of this (so it needs to be chained prior to the final join method call). Be careful: this call is needed, otherwise null elements will throw an NPE. The join method is passed the data structure, an Iterable in this case.

Similarly for maps, what we used to achieve with this:

    public static final <K,V> String join(final Map<K,V> map, String delimiter){
        if(map == null)
            throw new IllegalArgumentException();
        StringBuilder result = new StringBuilder("Conventional map join: ");
        for(K key : map.keySet()){
            result.append(key).append("->").append(map.get(key)).append(delimiter);
        }
        result.setLength(result.length() - 1);
        return result.toString();
    }

can now be achieved by:

Joiner.on(",").withKeyValueSeparator("->").useForNull("null").join(map)

In this instance, the withKeyValueSeparator method call returns a Joiner.MapJoiner, a static class that adds the map context to the Joiner class.

Spring, JdbcTemplate, Oracle example

This is a demonstration of how Spring and Oracle are playing happily together via JdbcTemplate.

Prerequisites:

Let's start from the end: this is what the JUnit 4 test case looks like for the DAO of our domain object, covering all its CRUD operations, armoured with rollback functionality so our database data are not altered after our transactional operations. Notice the use of matchers and the Spring-context-enabling annotations on the test class:

package com.dimitrisli.springJdbcOracle.dao.impl;

import com.dimitrisli.springJdbcOracle.dao.LocationDao;
import com.dimitrisli.springJdbcOracle.model.Location;
import org.junit.Test;
import org.junit.runner.RunWith;
import static org.junit.Assert.*;
import static org.hamcrest.CoreMatchers.*;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.transaction.TransactionConfiguration;
import org.springframework.transaction.annotation.Transactional;

import javax.inject.Inject;
import java.util.List;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "/spring/context/applicationContext.xml")
@TransactionConfiguration(transactionManager = "jdbcTransactionManager", defaultRollback = true)
@Transactional
public class LocationDaoTest {

    @Inject private LocationDao locationDao;

    @Test
    public void testSelectAllLocations(){
        List<Location> locations = locationDao.getLocations();
        assertThat(locations.size(), is(23));
    }

    @Test
    public void testSelectOneLocation(){
      Location location = locationDao.getLocation(1000L);
      assertNotNull("test entry not found", location);
    }

    @Test
    public void testDeleteLocation(){
        assertNotNull("entry for test should be there", locationDao.getLocation(1000L));
        locationDao.deleteLocation(1000L);
        assertNull("entry wasn't successfully deleted", locationDao.getLocation(1000L));
    }

    @Test
    public void testInsertLocation(){
        Location location = new Location(1000L,"test","11111","athens","athens","IT");
        int sizeBeforeInsert = locationDao.getLocations().size();
        locationDao.createLocation(location);
        assertThat(locationDao.getLocations().size(),is(sizeBeforeInsert + 1));
    }

    @Test
    public void testUpdateLocation(){
        Location newLocation = new Location(1000L,"test","11111","athens","athens","IT");
        locationDao.updateLocation(newLocation);
        Location changedLocation = locationDao.getLocation(1000L);
        assertThat(changedLocation.getStreetAddress(), is("test"));
    }

}

The POM looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>SpringJdbcOracle</groupId>
    <artifactId>SpringJdbcOracle</artifactId>
    <version>1.0</version>


    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                    <encoding>${project.build.sourceEncoding}</encoding>
                </configuration>
            </plugin>

            <!--Logging related plugin
                this plugin breaks the build if non-wanted logging frameworks are spotted in the classpath
            -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-enforcer-plugin</artifactId>
                <version>1.0.1</version>
                <executions>
                    <execution>
                        <id>enforce-versions</id>
                        <goals>
                            <goal>enforce</goal>
                        </goals>
                        <configuration>
                            <rules>
                                <bannedDependencies>
                                    <excludes>
                                        <exclude>commons-logging:commons-logging</exclude>
                                        <exclude>log4j:log4j</exclude>
                                    </excludes>
                                </bannedDependencies>
                            </rules>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <!--Spring related dependencies -->
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-jdbc</artifactId>
            <version>${spring.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>commons-logging</groupId>
                    <artifactId>commons-logging</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>${spring.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>commons-logging</groupId>
                    <artifactId>commons-logging</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>javax.inject</groupId>
            <artifactId>javax.inject</artifactId>
            <version>1</version>
        </dependency>


        <!--Oracle jdbc driver-->
        <dependency>
            <groupId>com.oracle</groupId>
            <artifactId>ojdbc6</artifactId>
            <version>11.2.0.3</version>
        </dependency>

        <!-- DB Connection Pool -->
        <dependency>
            <groupId>commons-dbcp</groupId>
            <artifactId>commons-dbcp</artifactId>
            <version>1.4</version>
        </dependency>

        <!--Logging related dependencies
            Further info: http://www.slf4j.org/faq.html#excludingJCL and
                          http://blog.frankel.ch/configuring-maven-to-use-slf4j
        -->
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <!--scope should be runtime but applied at compile time
                to get autocompletion visibility at logback.xml-->
            <!--scope>runtime</scope-->
            <version>0.9.24</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.6.1</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>jcl-over-slf4j</artifactId>
            <version>1.7.2</version>
        </dependency>

        <!-- JUnit 4 -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
        </dependency>

        <!-- Misc -->
        <dependency>
            <groupId>cglib</groupId>
            <artifactId>cglib</artifactId>
            <version>2.2.2</version>
            <scope>runtime</scope>
        </dependency>


    </dependencies>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spring.version>3.1.2.RELEASE</spring.version>
     </properties>


</project>

Notes:
– Note how we explicitly pipe any commons-logging or log4j logging through our Logback/SLF4J wrapper
– Notice how we can declare our Oracle driver dependency, given that it's already installed in our local Maven repo (see prerequisites section)

The applicationContext:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">

    <import resource="classpath*:spring/database/database.xml"/>
    <context:component-scan base-package="com.dimitrisli.springJdbcOracle" />
</beans>

The imported database context above is:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:c="http://www.springframework.org/schema/c"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">

    <bean  class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"
           p:location="properties/database.properties" />

    <bean  id="dataSource"
           class="org.apache.commons.dbcp.BasicDataSource"
           destroy-method="close"
           p:driverClassName="${jdbc.driverClassName}"
           p:url="${jdbc.url}"
           p:username="${jdbc.username}"
           p:password="${jdbc.password}" />

    <bean  class="org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate"
           c:dataSource-ref="dataSource"  />

    <bean  id="jdbcTransactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager"
            p:dataSource-ref="dataSource"/>
    <tx:annotation-driven transaction-manager="jdbcTransactionManager"/>

</beans>

Notes:

– We are using DBCP for our DB connection pool datasource
– We are using the p namespace to save on some of the open/close XML characters of attribute injection
– We are explicitly declaring NamedParameterJdbcTemplate as our jdbcTemplate, injecting it with the needed datasource so it is conveniently available for injection in our DAOs.

The domain POJO object we are about to play with is the Location object that corresponds to the Locations table on the Oracle HR schema:


package com.dimitrisli.springJdbcOracle.model;

public class Location {

    private Long locationId;
    private String streetAddress;
    private String postalCode;
    private String city;
    private String stateProvince;
    private String countryId;

    public Location(Long locationId, String streetAddress, String postalCode, String city, String stateProvince, String countryId) {
        this.locationId = locationId;
        this.streetAddress = streetAddress;
        this.postalCode = postalCode;
        this.city = city;
        this.stateProvince = stateProvince;
        this.countryId = countryId;
    }

//getters, hashCode(), equals(), toString() omitted for brevity
}

The RowMapper that will produce Location objects from the result sets coming from the DB. It's a factory method (not static, but stateless by design) used internally by Spring for each DB row returned:

package com.dimitrisli.springJdbcOracle.orm;

import com.dimitrisli.springJdbcOracle.model.Location;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.stereotype.Component;

import java.sql.ResultSet;
import java.sql.SQLException;

@Component
public class LocationRowMapper implements RowMapper<Location> {

    @Override
    public Location mapRow(ResultSet rs, int rowNum) throws SQLException {
        return  new Location(rs.getLong("LOCATION_ID"),
                             rs.getString("STREET_ADDRESS"),
                             rs.getString("POSTAL_CODE"),
                             rs.getString("CITY"),
                             rs.getString("STATE_PROVINCE"),
                             rs.getString("COUNTRY_ID"));
    }
}

Here’s the DAO interface responsible for the CRUD operations:

package com.dimitrisli.springJdbcOracle.dao;

import com.dimitrisli.springJdbcOracle.model.Location;

import java.util.List;

public interface LocationDao {

    public void createLocation(Location location);
    public List<Location> getLocations();
    public Location getLocation(Long locationId);
    public void updateLocation(Location location);
    public void deleteLocation(Long locationId);

}

and its implementation looks like this:


package com.dimitrisli.springJdbcOracle.dao.impl;

import com.dimitrisli.springJdbcOracle.dao.LocationDao;
import com.dimitrisli.springJdbcOracle.model.Location;
import com.dimitrisli.springJdbcOracle.orm.LocationRowMapper;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcOperations;
import org.springframework.jdbc.core.namedparam.SqlParameterSource;
import org.springframework.stereotype.Repository;

import javax.inject.Inject;
import java.util.HashMap;
import java.util.List;

@Repository("locationDao")
public class LocationDaoImpl implements LocationDao {

    private static final String CREATE_SQL = "INSERT INTO LOCATIONS( LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, " +
                                             "STATE_PROVINCE, COUNTRY_ID) " +
                                             "VALUES (LOCATIONS_SEQ.NEXTVAL, :streetAddress, :postalCode, :city, " +
                                             ":stateProvince, :countryId)";

    private static final String GET_ALL_SQL = "SELECT LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID " +
                                              "FROM LOCATIONS";

    private static final String GET_SQL = "SELECT LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID " +
                                          "FROM LOCATIONS WHERE LOCATION_ID = :locationId";

    private static final String DELETE_SQL = "DELETE LOCATIONS WHERE LOCATION_ID = :locationId";

    private static final String UPDATE_SQL = "UPDATE LOCATIONS SET STREET_ADDRESS = :streetAddress, POSTAL_CODE=:postalCode, " +
                                            "CITY = :city, STATE_PROVINCE = :stateProvince, COUNTRY_ID = :countryId " +
                                            "WHERE LOCATION_ID = :locationId";

    @Inject private NamedParameterJdbcOperations jdbcTemplate;
    @Inject private LocationRowMapper locationRowMapper;

    @Override
    public void createLocation(Location location) {
        SqlParameterSource params = new MapSqlParameterSource()
                .addValue("streetAddress", location.getStreetAddress())
                .addValue("postalCode", location.getPostalCode())
                .addValue("city", location.getCity())
                .addValue("stateProvince", location.getStateProvince())
                .addValue("countryId", location.getCountryId());
        jdbcTemplate.update(CREATE_SQL, params);
    }

    @Override
    public List<Location> getLocations() {
        return jdbcTemplate.query(GET_ALL_SQL, new HashMap<String, Object>(), locationRowMapper);
    }

    @Override
    public Location getLocation(Long locationId) {
        SqlParameterSource params = new MapSqlParameterSource()
                .addValue("locationId", locationId);
        List<Location> locations = jdbcTemplate.query(GET_SQL, params, locationRowMapper);
        return locations.isEmpty()?null:locations.get(0);
    }

    @Override
    public void updateLocation(Location location) {
        SqlParameterSource params = new MapSqlParameterSource()
                .addValue("locationId", location.getLocationId())
                .addValue("streetAddress", location.getStreetAddress())
                .addValue("postalCode", location.getPostalCode())
                .addValue("city", location.getCity())
                .addValue("stateProvince", location.getStateProvince())
                .addValue("countryId", location.getCountryId());
        jdbcTemplate.update(UPDATE_SQL, params);
    }

    @Override
    public void deleteLocation(Long locationId) {
        jdbcTemplate.update(DELETE_SQL, new MapSqlParameterSource("locationId",locationId));
    }
}

Notes:
– Notice how we inject the JdbcTemplate rather than fetching it from this class directly
– Notice how we inject the RowMapper rather than anonymous-classing it in this class directly
– The CRUD operations are set up, parameterised, at the top of the file as private static finals
– In all the CRUD operations we are using either jdbcTemplate.update() or jdbcTemplate.query() methods

Here's the GitHub repo of the project

Maven install ojdbc6

I really wish the Oracle driver jar were part of a (legal) publicly available Maven repo, but it's not. So we'll have to take matters into our own hands and install it in our local repo once and for all, so we can effortlessly summon it thereafter via our POM file:

       <dependency>
            <groupId>com.oracle</groupId>
            <artifactId>ojdbc6</artifactId>
            <version>11.2.0.3</version>
        </dependency>

Steps:

  • Download the ojdbc6.jar from the Oracle website. I tried to automate this step via a Groovy script but this pesky agreement radio-button gets in the way (which is there for a reason, to be fair)
  • Supposing mvn is already setup in your path:
  • mvn install:install-file -Dfile=ojdbc6.jar -DgroupId=com.oracle -DartifactId=ojdbc6 -Dversion=11.2.0.3 -Dpackaging=jar -DgeneratePom=true

Install Java 6,7,8 on Mac OS X

Here's a quick guide to installing Java versions 6, 7 and 8 on Mac OS X.

Java 6

Java 6 is the last version provided and supported by Apple. Therefore we'll follow the Apple way to install the JDK, although Java 1.6 can also be obtained from Oracle or the OpenJDK project.

A successful installation will place Java 6 under /Library/Java/JavaVirtualMachines/1.6.0_37-b06-434.jdk/. Following that, you can set JAVA_HOME or point your IDE towards /Library/Java/JavaVirtualMachines/1.6.0_37-b06-434.jdk/Contents/Home/. Also, to browse the JDK sources from your IDE, point your editor to /Library/Java/JavaVirtualMachines/1.6.0_37-b06-434.jdk/Contents/Home/src.jar

Java 7

We need to go out into the wild to get JDK 7 installed on Mac OS X, since mother-Apple doesn’t support it. We can download it either from Oracle or from the OpenJDK project:

A successful installation will place Java 7 under /Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/. Following that, you can set JAVA_HOME or point your IDE towards /Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/. Also, to browse the JDK sources from your IDE, point your editor to /Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/src.jar

Java 8

We can get a copy of the latest snapshot of JDK 8 from this download resource to play around with lambda expressions (natively supported in IntelliJ IDEA), in case you haven’t already tried closures in Groovy or Scala, or Predicates/Functions in Google Guava.

A successful installation will place Java 8 under /Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/. Following that, you can set JAVA_HOME or point your IDE towards /Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/. Also, to browse the JDK sources from your IDE, point your editor to /Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/src.jar
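With JDK 8 in place, a quick smoke test of the new lambda syntax could look like the sketch below (the class name is hypothetical):

```java
import java.util.Arrays;
import java.util.List;

public class LambdaSmokeTest {
    public static void main(String[] args) {
        List<String> langs = Arrays.asList("Groovy", "Scala", "Java");
        // a lambda in place of an anonymous Comparator implementation
        langs.sort((a, b) -> a.compareTo(b));
        System.out.println(langs); // prints [Groovy, Java, Scala]
    }
}
```

Compiling this against any pre-8 JDK will fail at the lambda, which doubles as a handy check that JAVA_HOME (or your IDE) really points at the JDK 8 install.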

Groovy, Scala closures demonstration

This is a quick demo of closures usage for Groovy and Scala side-by-side.

Let’s get an easy study case where we are trying to identify whether a string contains unique characters. Starting with Java we would have:

Java

import java.util.HashSet;

public class StringAllUniqueChars {

    public static boolean hasStringAllUniqueChars(String str){

        //cache data structure
        final HashSet<Character> stackConfinedCache = new HashSet<Character>();
        //imperative iteration
        for(Character c : str.toCharArray()){
            if(stackConfinedCache.remove(c))
                //fail fast
                return false;
            else
                stackConfinedCache.add(c);
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println("hello\t" + StringAllUniqueChars.hasStringAllUniqueChars("hello"));
        System.out.println("helo\t" + StringAllUniqueChars.hasStringAllUniqueChars("helo"));
    }
}

Keypoints:

  • We create a data structure to store the characters seen along the way and fail fast on the first duplicate character found
  • We iterate over the chars of the string using a for loop
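For comparison, once JDK 8’s streams become available, the same check shrinks to a one-liner; here is a sketch under that assumption (class name hypothetical):

```java
public class StringAllUniqueCharsJava8 {

    public static boolean hasStringAllUniqueChars(String str) {
        // distinct() drops duplicate chars; equal counts mean no duplicates
        return str.chars().distinct().count() == str.length();
    }

    public static void main(String[] args) {
        System.out.println("hello\t" + hasStringAllUniqueChars("hello"));
        System.out.println("helo\t" + hasStringAllUniqueChars("helo"));
    }
}
```

Note that this version trades the fail-fast behaviour of the loop for brevity: distinct() consumes the whole stream before count() compares the sizes.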

Groovy

def hasStringAllUniqueChars(str){

    str.collect {         //in a collection
            str.count(it)    //the occurrences of each char
        }
        .findAll {
            it>1    //filter those with more than one occurrences
        }
        .size()==0  //make sure they don't exist
}

def hasStringAllUniqueChars2(str){
    !   //if we don't
    str.any{       //find any
        str.count(it)>1    //character occurrence in string more than once
    }
}

println "hello\t" + hasStringAllUniqueChars2("hello")
println "helo\t" + hasStringAllUniqueChars("helo")

Keypoints:

  • We are running it in the form of a Groovy script
  • We are making use of the default it reference inside the closures
  • In the first implementation we first transform the string’s characters into a collection of their occurrence counts. Then we filter for the duplicate characters and finally make the decision based on whether any duplicates exist at all.
  • In the second implementation we take a shortcut using Groovy’s any method, whose closure picks out only duplicate characters

Scala

object StringAllUniqueChars {

  def hasStringAllUniqueChars(str: String) =

    !str.exists{  //if we don't find any case where
      c =>    //each character's
        str.count(_==c)>1       //count in the string is greater than 1
    }

  def main(args: Array[String]){
    println("hello\t"+hasStringAllUniqueChars("hello"))
    println("helo\t"+hasStringAllUniqueChars("helo"))
  }

}

Keypoints:

  • We are using an object since we want to host our main method somewhere. It’s Scala’s way of addressing statics; normally the so-called companion object groups all the static content of its corresponding class.
  • We are employing closures in a similar way to the second Groovy implementation, using Scala’s exists and count methods

For the sake of completeness, this is how Scala, Groovy and Java co-exist happily together at compile/run time under the Maven umbrella:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>ScalaGroovyClosures</groupId>
    <artifactId>ScalaGroovyClosures</artifactId>
    <version>1.0</version>

    <build>
        <plugins>
            <!--java compiler-->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>

            <!--scala compiler-->
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.15.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <!--groovy compiler-->
            <plugin>
                <groupId>org.codehaus.groovy.maven</groupId>
                <artifactId>gmaven-plugin</artifactId>
                <version>1.0</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.9.2</version>
        </dependency>
        <dependency>
            <groupId>org.codehaus.groovy.maven.runtime</groupId>
            <artifactId>gmaven-runtime-1.6</artifactId>
            <version>1.0</version>
        </dependency>
    </dependencies>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

</project>